00:00:00.001 Started by upstream project "autotest-nightly" build number 4310
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3673
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.142 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.142 The recommended git tool is: git
00:00:00.142 using credential 00000000-0000-0000-0000-000000000002
00:00:00.144 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.188 Fetching changes from the remote Git repository
00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.229 > git --version # timeout=10
00:00:00.257 > git --version # 'git version 2.39.2'
00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.279 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.279 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.807 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.820 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.831 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.831 > git config core.sparsecheckout # timeout=10
00:00:08.842 > git read-tree -mu HEAD # timeout=10
00:00:08.858 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.883 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.883 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
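The block above is Jenkins' standard shallow-checkout sequence. For reference, a minimal sketch that replays it by hand, assuming anonymous HTTPS access works (the agent used stored credentials and an HTTP proxy, omitted here); the clone directory name is illustrative:

  #!/usr/bin/env bash
  # Replay the shallow fetch/checkout logged above.
  set -euo pipefail
  repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git init jbp && cd jbp
  git config remote.origin.url "$repo"
  # --depth=1 matches "Using shallow fetch with depth 1" in the log
  git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
  # detach at the fetched tip, as the agent does with the resolved SHA
  git checkout -f FETCH_HEAD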
00:00:08.990 [Pipeline] Start of Pipeline
00:00:09.006 [Pipeline] library
00:00:09.008 Loading library shm_lib@master
00:00:09.008 Library shm_lib@master is cached. Copying from home.
00:00:09.021 [Pipeline] node
00:00:09.034 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:09.035 [Pipeline] {
00:00:09.047 [Pipeline] catchError
00:00:09.048 [Pipeline] {
00:00:09.061 [Pipeline] wrap
00:00:09.069 [Pipeline] {
00:00:09.077 [Pipeline] stage
00:00:09.078 [Pipeline] { (Prologue)
00:00:09.097 [Pipeline] echo
00:00:09.099 Node: VM-host-SM38
00:00:09.105 [Pipeline] cleanWs
00:00:09.116 [WS-CLEANUP] Deleting project workspace...
00:00:09.116 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.122 [WS-CLEANUP] done
00:00:09.382 [Pipeline] setCustomBuildProperty
00:00:09.492 [Pipeline] httpRequest
00:00:09.962 [Pipeline] echo
00:00:09.965 Sorcerer 10.211.164.101 is alive
00:00:09.975 [Pipeline] retry
00:00:09.977 [Pipeline] {
00:00:09.993 [Pipeline] httpRequest
00:00:09.998 HttpMethod: GET
00:00:09.998 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.999 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.021 Response Code: HTTP/1.1 200 OK
00:00:10.022 Success: Status code 200 is in the accepted range: 200,404
00:00:10.022 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:35.077 [Pipeline] }
00:00:35.095 [Pipeline] // retry
00:00:35.102 [Pipeline] sh
00:00:35.392 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:35.410 [Pipeline] httpRequest
00:00:35.831 [Pipeline] echo
00:00:35.833 Sorcerer 10.211.164.101 is alive
00:00:35.844 [Pipeline] retry
00:00:35.846 [Pipeline] {
00:00:35.861 [Pipeline] httpRequest
00:00:35.866 HttpMethod: GET
00:00:35.867 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:35.868 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:35.887 Response Code: HTTP/1.1 200 OK
00:00:35.887 Success: Status code 200 is in the accepted range: 200,404
00:00:35.888 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:24.626 [Pipeline] }
00:01:24.645 [Pipeline] // retry
00:01:24.654 [Pipeline] sh
00:01:24.938 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:28.257 [Pipeline] sh
00:01:28.541 + git -C spdk log --oneline -n5
00:01:28.541 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:28.541 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:28.541 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:28.541 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:28.541 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:28.562 [Pipeline] writeFile
00:01:28.577 [Pipeline] sh
00:01:28.861 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:28.876 [Pipeline] sh
00:01:29.166 + cat autorun-spdk.conf
00:01:29.166 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.166 SPDK_TEST_NVME=1
00:01:29.166 SPDK_TEST_FTL=1
00:01:29.166 SPDK_TEST_ISAL=1
00:01:29.166 SPDK_RUN_ASAN=1
00:01:29.166 SPDK_RUN_UBSAN=1
00:01:29.166 SPDK_TEST_XNVME=1
00:01:29.166 SPDK_TEST_NVME_FDP=1
00:01:29.166 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.175 RUN_NIGHTLY=1
00:01:29.177 [Pipeline] }
00:01:29.190 [Pipeline] // stage
00:01:29.205 [Pipeline] stage
00:01:29.208 [Pipeline] { (Run VM)
00:01:29.221 [Pipeline] sh
00:01:29.506 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:29.506 + echo 'Start stage prepare_nvme.sh'
00:01:29.506 Start stage prepare_nvme.sh
00:01:29.506 + [[ -n 4 ]]
00:01:29.506 + disk_prefix=ex4
00:01:29.506 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:29.506 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:29.506 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:29.506 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.506 ++ SPDK_TEST_NVME=1
00:01:29.506 ++ SPDK_TEST_FTL=1
00:01:29.506 ++ SPDK_TEST_ISAL=1
00:01:29.506 ++ SPDK_RUN_ASAN=1
00:01:29.506 ++ SPDK_RUN_UBSAN=1
00:01:29.506 ++ SPDK_TEST_XNVME=1
00:01:29.506 ++ SPDK_TEST_NVME_FDP=1
00:01:29.506 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.506 ++ RUN_NIGHTLY=1
00:01:29.506 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:29.506 + nvme_files=()
00:01:29.506 + declare -A nvme_files
00:01:29.506 + backend_dir=/var/lib/libvirt/images/backends
00:01:29.506 + nvme_files['nvme.img']=5G
00:01:29.506 + nvme_files['nvme-cmb.img']=5G
00:01:29.506 + nvme_files['nvme-multi0.img']=4G
00:01:29.506 + nvme_files['nvme-multi1.img']=4G
00:01:29.506 + nvme_files['nvme-multi2.img']=4G
00:01:29.506 + nvme_files['nvme-openstack.img']=8G
00:01:29.506 + nvme_files['nvme-zns.img']=5G
00:01:29.506 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:29.506 + (( SPDK_TEST_FTL == 1 ))
00:01:29.506 + nvme_files["nvme-ftl.img"]=6G
00:01:29.506 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:29.506 + nvme_files["nvme-fdp.img"]=1G
00:01:29.506 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:29.506 + for nvme in "${!nvme_files[@]}"
00:01:29.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:29.506 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.506 + for nvme in "${!nvme_files[@]}"
00:01:29.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:29.774 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.774 + for nvme in "${!nvme_files[@]}"
00:01:29.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:30.035 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:30.035 + for nvme in "${!nvme_files[@]}"
00:01:30.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:30.035 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.035 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:30.035 + echo 'End stage prepare_nvme.sh'
00:01:30.035 End stage prepare_nvme.sh
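prepare_nvme.sh drives the image creation from a bash associative array keyed by file name, extending it only when the matching SPDK_TEST_* flag is set. A condensed sketch of the same pattern; qemu-img stands in for create_nvme_img.sh here (an assumption, since only that wrapper's -n/-s flags appear in the log), with the sizes this run used:

  #!/usr/bin/env bash
  # Recreate the raw, falloc-preallocated backing files formatted above.
  set -euo pipefail
  backend_dir=/var/lib/libvirt/images/backends
  declare -A nvme_files=(
    [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
    [nvme-openstack.img]=8G
    [nvme-ftl.img]=6G   # added because SPDK_TEST_FTL=1
    [nvme-fdp.img]=1G   # added because SPDK_TEST_NVME_FDP=1
  )
  mkdir -p "$backend_dir"
  for nvme in "${!nvme_files[@]}"; do
    # "Formatting ... fmt=raw ... preallocation=falloc" above matches raw falloc images
    qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex4-$nvme" "${nvme_files[$nvme]}"
  done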
00:01:30.047 [Pipeline] sh
00:01:30.330 + DISTRO=fedora39
00:01:30.330 + CPUS=10
00:01:30.330 + RAM=12288
00:01:30.330 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:30.330 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:30.330
00:01:30.330 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:30.330 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:30.330 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:30.330 HELP=0
00:01:30.330 DRY_RUN=0
00:01:30.330 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:30.330 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:30.330 NVME_AUTO_CREATE=0
00:01:30.330 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:30.330 NVME_CMB=,,,,
00:01:30.330 NVME_PMR=,,,,
00:01:30.330 NVME_ZNS=,,,,
00:01:30.330 NVME_MS=true,,,,
00:01:30.330 NVME_FDP=,,,on,
00:01:30.330 SPDK_VAGRANT_DISTRO=fedora39
00:01:30.330 SPDK_VAGRANT_VMCPU=10
00:01:30.330 SPDK_VAGRANT_VMRAM=12288
00:01:30.330 SPDK_VAGRANT_PROVIDER=libvirt
00:01:30.330 SPDK_VAGRANT_HTTP_PROXY=
00:01:30.330 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:30.330 SPDK_OPENSTACK_NETWORK=0
00:01:30.330 VAGRANT_PACKAGE_BOX=0
00:01:30.330 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:30.330 FORCE_DISTRO=true
00:01:30.330 VAGRANT_BOX_VERSION=
00:01:30.330 EXTRA_VAGRANTFILES=
00:01:30.330 NIC_MODEL=e1000
00:01:30.330
00:01:30.330 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:30.330 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:32.876 Bringing machine 'default' up with 'libvirt' provider...
00:01:33.137 ==> default: Creating image (snapshot of base box volume).
00:01:33.137 ==> default: Creating domain with the following settings...
00:01:33.137 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732733982_f1888b05f2948783fe27
00:01:33.137 ==> default: -- Domain type: kvm
00:01:33.137 ==> default: -- Cpus: 10
00:01:33.137 ==> default: -- Feature: acpi
00:01:33.137 ==> default: -- Feature: apic
00:01:33.137 ==> default: -- Feature: pae
00:01:33.137 ==> default: -- Memory: 12288M
00:01:33.137 ==> default: -- Memory Backing: hugepages:
00:01:33.137 ==> default: -- Management MAC:
00:01:33.137 ==> default: -- Loader:
00:01:33.137 ==> default: -- Nvram:
00:01:33.137 ==> default: -- Base box: spdk/fedora39
00:01:33.137 ==> default: -- Storage pool: default
00:01:33.137 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732733982_f1888b05f2948783fe27.img (20G)
00:01:33.137 ==> default: -- Volume Cache: default
00:01:33.138 ==> default: -- Kernel:
00:01:33.138 ==> default: -- Initrd:
00:01:33.138 ==> default: -- Graphics Type: vnc
00:01:33.138 ==> default: -- Graphics Port: -1
00:01:33.138 ==> default: -- Graphics IP: 127.0.0.1
00:01:33.138 ==> default: -- Graphics Password: Not defined
00:01:33.138 ==> default: -- Video Type: cirrus
00:01:33.138 ==> default: -- Video VRAM: 9216
00:01:33.138 ==> default: -- Sound Type:
00:01:33.138 ==> default: -- Keymap: en-us
00:01:33.138 ==> default: -- TPM Path:
00:01:33.138 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:33.138 ==> default: -- Command line args:
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:33.138 ==> default: -> value=-drive,
00:01:33.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:33.138 ==> default: -> value=-device,
00:01:33.138 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
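Each -b backend from the Setup line above becomes one NVMe controller in this argument dump; the last one is attached to an FDP-enabled subsystem. A standalone sketch of just that controller, with the nvme* arguments copied from the dump; the machine, memory, and display flags are illustrative and assume a QEMU new enough for the fdp.* parameters (this job pins vanilla v8.0.0):

  # Boot a minimal guest exposing only the FDP namespace.
  qemu-system-x86_64 -machine q35,accel=kvm -m 1024 -display none \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096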
00:01:33.399 ==> default: Creating shared folders metadata...
00:01:33.399 ==> default: Starting domain.
00:01:34.783 ==> default: Waiting for domain to get an IP address...
00:01:52.934 ==> default: Waiting for SSH to become available...
00:01:52.934 ==> default: Configuring and enabling network interfaces...
00:01:55.501 default: SSH address: 192.168.121.161:22
00:01:55.501 default: SSH username: vagrant
00:01:55.501 default: SSH auth method: private key
00:01:58.050 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:06.189 ==> default: Mounting SSHFS shared folder...
00:02:08.104 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.104 ==> default: Checking Mount..
00:02:09.184 ==> default: Folder Successfully Mounted!
00:02:09.184
00:02:09.184 SUCCESS!
00:02:09.184
00:02:09.184 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:09.184 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:09.184 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:09.184
00:02:09.194 [Pipeline] }
00:02:09.206 [Pipeline] // stage
00:02:09.213 [Pipeline] dir
00:02:09.214 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:09.215 [Pipeline] {
00:02:09.226 [Pipeline] catchError
00:02:09.228 [Pipeline] {
00:02:09.238 [Pipeline] sh
00:02:09.519 + vagrant ssh-config --host vagrant
00:02:09.519 + sed -ne '/^Host/,$p'
00:02:09.519 + tee ssh_conf
00:02:12.066 Host vagrant
00:02:12.066 HostName 192.168.121.161
00:02:12.066 User vagrant
00:02:12.066 Port 22
00:02:12.066 UserKnownHostsFile /dev/null
00:02:12.066 StrictHostKeyChecking no
00:02:12.066 PasswordAuthentication no
00:02:12.066 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:12.066 IdentitiesOnly yes
00:02:12.066 LogLevel FATAL
00:02:12.066 ForwardAgent yes
00:02:12.066 ForwardX11 yes
00:02:12.066
00:02:12.081 [Pipeline] withEnv
00:02:12.083 [Pipeline] {
00:02:12.096 [Pipeline] sh
00:02:12.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:12.378 source /etc/os-release
00:02:12.378 [[ -e /image.version ]] && img=$(< /image.version)
00:02:12.378 # Minimal, systemd-like check.
00:02:12.378 if [[ -e /.dockerenv ]]; then
00:02:12.378 # Clear garbage from the node'\''s name:
00:02:12.378 # agt-er_autotest_547-896 -> autotest_547-896
00:02:12.378 # $HOSTNAME is the actual container id
00:02:12.378 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:12.378 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:12.378 # We can assume this is a mount from a host where container is running,
00:02:12.378 # so fetch its hostname to easily identify the target swarm worker.
00:02:12.378 container="$(< /etc/hostname) ($agent)"
00:02:12.378 else
00:02:12.378 # Fallback
00:02:12.378 container=$agent
00:02:12.378 fi
00:02:12.378 fi
00:02:12.378 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:12.378 '
00:02:12.652 [Pipeline] }
00:02:12.672 [Pipeline] // withEnv
00:02:12.680 [Pipeline] setCustomBuildProperty
00:02:12.696 [Pipeline] stage
00:02:12.699 [Pipeline] { (Tests)
00:02:12.720 [Pipeline] sh
00:02:13.003 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:13.279 [Pipeline] sh
00:02:13.562 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:13.842 [Pipeline] timeout
00:02:13.843 Timeout set to expire in 50 min
00:02:13.844 [Pipeline] {
00:02:13.859 [Pipeline] sh
00:02:14.141 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:14.715 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:14.730 [Pipeline] sh
00:02:15.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:15.291 [Pipeline] sh
00:02:15.576 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:15.853 [Pipeline] sh
00:02:16.138 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:16.399 ++ readlink -f spdk_repo
00:02:16.400 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:16.400 + [[ -n /home/vagrant/spdk_repo ]]
00:02:16.400 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:16.400 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:16.400 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:16.400 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:16.400 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:16.400 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:16.400 + cd /home/vagrant/spdk_repo
00:02:16.400 + source /etc/os-release
00:02:16.400 ++ NAME='Fedora Linux'
00:02:16.400 ++ VERSION='39 (Cloud Edition)'
00:02:16.400 ++ ID=fedora
00:02:16.400 ++ VERSION_ID=39
00:02:16.400 ++ VERSION_CODENAME=
00:02:16.400 ++ PLATFORM_ID=platform:f39
00:02:16.400 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:16.400 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:16.400 ++ LOGO=fedora-logo-icon
00:02:16.400 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:16.400 ++ HOME_URL=https://fedoraproject.org/
00:02:16.400 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:16.400 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:16.400 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:16.400 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:16.400 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:16.400 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:16.400 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:16.400 ++ SUPPORT_END=2024-11-12
00:02:16.400 ++ VARIANT='Cloud Edition'
00:02:16.400 ++ VARIANT_ID=cloud
00:02:16.400 + uname -a
00:02:16.400 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:16.400 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:16.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:16.922 Hugepages
00:02:16.923 node hugesize free / total
00:02:16.923 node0 1048576kB 0 / 0
00:02:16.923 node0 2048kB 0 / 0
00:02:16.923
00:02:16.923 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:16.923 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:16.923 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:17.185 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:17.185 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:17.185 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:17.185 + rm -f /tmp/spdk-ld-path
00:02:17.185 + source autorun-spdk.conf
00:02:17.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.185 ++ SPDK_TEST_NVME=1
00:02:17.185 ++ SPDK_TEST_FTL=1
00:02:17.185 ++ SPDK_TEST_ISAL=1
00:02:17.185 ++ SPDK_RUN_ASAN=1
00:02:17.185 ++ SPDK_RUN_UBSAN=1
00:02:17.185 ++ SPDK_TEST_XNVME=1
00:02:17.185 ++ SPDK_TEST_NVME_FDP=1
00:02:17.185 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.185 ++ RUN_NIGHTLY=1
00:02:17.185 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.185 + [[ -n '' ]]
00:02:17.185 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:17.185 + for M in /var/spdk/build-*-manifest.txt
00:02:17.185 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.185 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.185 + for M in /var/spdk/build-*-manifest.txt
00:02:17.185 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.185 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.185 + for M in /var/spdk/build-*-manifest.txt
00:02:17.185 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:17.185 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.185 ++ uname
00:02:17.185 + [[ Linux == \L\i\n\u\x ]]
00:02:17.185 + sudo dmesg -T
00:02:17.185 + sudo dmesg --clear
00:02:17.185 + dmesg_pid=5027
+ [[ Fedora Linux == FreeBSD ]]
00:02:17.185 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.185 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.185 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:17.185 + [[ -x /usr/src/fio-static/fio ]]
00:02:17.185 + sudo dmesg -Tw
00:02:17.185 + export FIO_BIN=/usr/src/fio-static/fio
00:02:17.185 + FIO_BIN=/usr/src/fio-static/fio
00:02:17.185 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:17.185 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:17.185 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:17.185 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.185 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.185 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:17.185 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.185 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.185 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.447 19:00:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:17.447 19:00:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.447 19:00:26 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:02:17.447 19:00:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:17.447 19:00:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.447 19:00:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:17.447 19:00:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:17.447 19:00:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:17.447 19:00:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:17.447 19:00:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:17.447 19:00:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:17.447 19:00:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.447 19:00:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.448 19:00:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.448 19:00:26 -- paths/export.sh@5 -- $ export PATH
00:02:17.448 19:00:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.448 19:00:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:17.448 19:00:26 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:17.448 19:00:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732734026.XXXXXX
00:02:17.448 19:00:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732734026.SAPSu3
00:02:17.448 19:00:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:17.448 19:00:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:17.448 19:00:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:17.448 19:00:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:17.448 19:00:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:17.448 19:00:26 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:17.448 19:00:26 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:17.448 19:00:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.448 19:00:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:17.448 19:00:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:17.448 19:00:26 -- pm/common@17 -- $ local monitor
00:02:17.448 19:00:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.448 19:00:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.448 19:00:26 -- pm/common@25 -- $ sleep 1
00:02:17.448 19:00:26 -- pm/common@21 -- $ date +%s
00:02:17.448 19:00:26 -- pm/common@21 -- $ date +%s
00:02:17.448 19:00:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732734026
00:02:17.448 19:00:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732734026
00:02:17.448 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732734026_collect-cpu-load.pm.log
00:02:17.448 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732734026_collect-vmstat.pm.log
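start_monitor_resources launches one collector per entry in MONITOR_RESOURCES, passing each the same output directory and name suffix. A reduced sketch of that loop; the -d/-l/-p flags are taken from the logged invocations, while the relative script path and the backgrounding are assumptions:

  # Start the CPU-load and vmstat collectors the way autobuild.sh does above.
  out=/home/vagrant/spdk_repo/output/power
  stamp=$(date +%s)   # 1732734026 in this run
  mkdir -p "$out"
  for monitor in collect-cpu-load collect-vmstat; do
    ./spdk/scripts/perf/pm/"$monitor" -d "$out" -l -p "monitor.autobuild.sh.$stamp" &
  done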
00:02:18.392 19:00:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:18.392 19:00:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:18.392 19:00:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:18.392 19:00:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:18.392 19:00:27 -- spdk/autobuild.sh@16 -- $ date -u
00:02:18.392 Wed Nov 27 07:00:27 PM UTC 2024
00:02:18.392 19:00:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:18.392 v25.01-pre-276-g35cd3e84d
00:02:18.392 19:00:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:18.392 19:00:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:18.392 19:00:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:18.392 19:00:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:18.392 19:00:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.392 ************************************
00:02:18.392 START TEST asan
00:02:18.392 ************************************
00:02:18.392 using asan
00:02:18.392 19:00:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:18.392
00:02:18.392 real 0m0.000s
00:02:18.392 user 0m0.000s
00:02:18.392 sys 0m0.000s
00:02:18.392 ************************************
00:02:18.392 END TEST asan
00:02:18.392 ************************************
00:02:18.392 19:00:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:18.392 19:00:27 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.654 19:00:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:18.654 19:00:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:18.654 19:00:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:18.654 19:00:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:18.654 19:00:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.654 ************************************
00:02:18.654 START TEST ubsan
00:02:18.654 ************************************
00:02:18.654 using ubsan
00:02:18.654 19:00:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:18.654
00:02:18.654 real 0m0.000s
00:02:18.654 user 0m0.000s
00:02:18.654 sys 0m0.000s
00:02:18.654 ************************************
00:02:18.654 END TEST ubsan
00:02:18.654 ************************************
00:02:18.654 19:00:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:18.654 19:00:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.654 19:00:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:18.654 19:00:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:18.654 19:00:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:18.654 19:00:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:18.654 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:18.654 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:19.227 Using 'verbs' RDMA provider
00:02:32.436 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:42.404 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:42.404 Creating mk/config.mk...done.
00:02:42.404 Creating mk/cc.flags.mk...done.
00:02:42.404 Type 'make' to build.
00:02:42.404 19:00:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:42.404 19:00:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:42.404 19:00:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:42.404 19:00:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:42.404 ************************************
00:02:42.404 START TEST make
00:02:42.404 ************************************
00:02:42.404 19:00:51 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:42.404 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:42.404 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:42.404 meson setup builddir \
00:02:42.404 -Dwith-libaio=enabled \
00:02:42.404 -Dwith-liburing=enabled \
00:02:42.404 -Dwith-libvfn=disabled \
00:02:42.404 -Dwith-spdk=disabled \
00:02:42.404 -Dexamples=false \
00:02:42.404 -Dtests=false \
00:02:42.404 -Dtools=false && \
00:02:42.404 meson compile -C builddir && \
00:02:42.404 cd -)
00:02:42.404 make[1]: Nothing to be done for 'all'.
00:02:44.333 The Meson build system
00:02:44.333 Version: 1.5.0
00:02:44.333 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:44.333 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:44.333 Build type: native build
00:02:44.333 Project name: xnvme
00:02:44.333 Project version: 0.7.5
00:02:44.333 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:44.333 C linker for the host machine: cc ld.bfd 2.40-14
00:02:44.333 Host machine cpu family: x86_64
00:02:44.333 Host machine cpu: x86_64
00:02:44.333 Message: host_machine.system: linux
00:02:44.333 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:44.333 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:44.333 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:44.333 Run-time dependency threads found: YES
00:02:44.334 Has header "setupapi.h" : NO
00:02:44.334 Has header "linux/blkzoned.h" : YES
00:02:44.334 Has header "linux/blkzoned.h" : YES (cached)
00:02:44.334 Has header "libaio.h" : YES
00:02:44.334 Library aio found: YES
00:02:44.334 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:44.334 Run-time dependency liburing found: YES 2.2
00:02:44.334 Dependency libvfn skipped: feature with-libvfn disabled
00:02:44.334 Found CMake: /usr/bin/cmake (3.27.7)
00:02:44.334 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:44.334 Subproject spdk : skipped: feature with-spdk disabled
00:02:44.334 Run-time dependency appleframeworks found: NO (tried framework)
00:02:44.334 Run-time dependency appleframeworks found: NO (tried framework)
00:02:44.334 Library rt found: YES
00:02:44.334 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:44.334 Configuring xnvme_config.h using configuration
00:02:44.334 Configuring xnvme.spec using configuration
00:02:44.334 Run-time dependency bash-completion found: YES 2.11
00:02:44.334 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:44.334 Program cp found: YES (/usr/bin/cp)
00:02:44.334 Build targets in project: 3
00:02:44.334
00:02:44.334 xnvme 0.7.5
00:02:44.334
00:02:44.334 Subprojects
00:02:44.334 spdk : NO Feature 'with-spdk' disabled
00:02:44.334
00:02:44.334 User defined options
00:02:44.334 examples : false
00:02:44.334 tests : false
00:02:44.334 tools : false
00:02:44.334 with-libaio : enabled
00:02:44.334 with-liburing: enabled
00:02:44.334 with-libvfn : disabled
00:02:44.334 with-spdk : disabled
00:02:44.334
00:02:44.334 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
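The YES/NO probes above are meson dependency(), find_library(), and has_header() checks; the pkg-config ones can be re-run by hand to verify a build host. A small sketch, assuming pkg-config is on PATH (the versions in the comments are what this run found):

  pkg-config --modversion liburing          # 2.2 here, so -Dwith-liburing=enabled is satisfiable
  pkg-config --modversion bash-completion   # 2.11 here
  pkg-config --exists libisal || echo 'libisal: NO'   # NO in this run; meson then tried cmake as well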
00:02:44.593 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:44.593 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:44.593 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:44.593 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:44.593 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:44.593 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:44.593 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:44.593 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:44.593 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:44.593 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:44.593 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:44.593 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:44.593 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:44.593 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:44.593 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:44.593 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:44.593 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:44.852 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:44.852 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:44.852 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:44.852 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:44.852 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:44.852 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:44.852 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:44.852 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:44.852 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:44.852 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:44.852 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:44.852 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:44.852 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:44.852 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:44.852 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:44.852 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:44.852 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:44.852 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:44.852 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:44.852 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:44.852 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:44.852 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:44.852 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:44.852 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:44.852 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:44.852 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:44.852 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:44.852 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:44.852 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:44.852 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:44.852 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:44.852 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:44.852 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:44.852 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:44.852 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:44.852 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:45.112 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:45.112 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:45.112 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:45.112 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:45.112 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:45.112 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:45.112 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:45.112 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:45.112 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:45.112 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:45.112 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:45.112 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:45.112 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:45.112 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:45.112 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:45.112 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:45.112 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:45.112 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:45.112 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:45.370 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:45.370 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:45.628 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:45.628 [75/76] Linking static target lib/libxnvme.a
00:02:45.628 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:45.628 INFO: autodetecting backend as ninja
00:02:45.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:45.628 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:52.193 The Meson build system
00:02:52.193 Version: 1.5.0
00:02:52.193 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:52.193 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:52.193 Build type: native build
00:02:52.193 Program cat found: YES (/usr/bin/cat)
00:02:52.193 Project name: DPDK
00:02:52.193 Project version: 24.03.0
00:02:52.193 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.193 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.193 Host machine cpu family: x86_64
00:02:52.193 Host machine cpu: x86_64
00:02:52.193 Message: ## Building in Developer Mode ##
00:02:52.193 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:52.193 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:52.193 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:52.193 Program python3 found: YES (/usr/bin/python3)
00:02:52.193 Program cat found: YES (/usr/bin/cat)
00:02:52.193 Compiler for C supports arguments -march=native: YES
00:02:52.193 Checking for size of "void *" : 8
00:02:52.193 Checking for size of "void *" : 8 (cached)
00:02:52.193 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:52.193 Library m found: YES
00:02:52.193 Library numa found: YES
00:02:52.193 Has header "numaif.h" : YES
00:02:52.193 Library fdt found: NO
00:02:52.193 Library execinfo found: NO
00:02:52.193 Has header "execinfo.h" : YES
00:02:52.193 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.193 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:52.193 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:52.193 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:52.193 Run-time dependency openssl found: YES 3.1.1
00:02:52.193 Run-time dependency libpcap found: YES 1.10.4
00:02:52.193 Has header "pcap.h" with dependency libpcap: YES
00:02:52.193 Compiler for C supports arguments -Wcast-qual: YES
00:02:52.193 Compiler for C supports arguments -Wdeprecated: YES
00:02:52.193 Compiler for C supports arguments -Wformat: YES
00:02:52.193 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:52.193 Compiler for C supports arguments -Wformat-security: NO
00:02:52.193 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.193 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:52.193 Compiler for C supports arguments -Wnested-externs: YES
00:02:52.193 Compiler for C supports arguments -Wold-style-definition: YES
00:02:52.193 Compiler for C supports arguments -Wpointer-arith: YES
00:02:52.193 Compiler for C supports arguments -Wsign-compare: YES
00:02:52.193 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:52.193 Compiler for C supports arguments -Wundef: YES
00:02:52.193 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.193 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:52.193 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:52.193 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.193 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:52.193 Program objdump found: YES (/usr/bin/objdump)
00:02:52.193 Compiler for C supports arguments -mavx512f: YES
00:02:52.193 Checking if "AVX512 checking" compiles: YES
00:02:52.193 Fetching value of define "__SSE4_2__" : 1
00:02:52.193 Fetching value of define "__AES__" : 1
00:02:52.193 Fetching value of define "__AVX__" : 1
00:02:52.193 Fetching value of define "__AVX2__" : 1
00:02:52.193 Fetching value of define "__AVX512BW__" : 1
00:02:52.193 Fetching value of define "__AVX512CD__" : 1
00:02:52.193 Fetching value of define "__AVX512DQ__" : 1
00:02:52.193 Fetching value of define "__AVX512F__" : 1
00:02:52.193 Fetching value of define "__AVX512VL__" : 1
00:02:52.193 Fetching value of define "__PCLMUL__" : 1
00:02:52.193 Fetching value of define "__RDRND__" : 1
00:02:52.193 Fetching value of define "__RDSEED__" : 1
00:02:52.193 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:52.193 Fetching value of define "__znver1__" : (undefined)
00:02:52.193 Fetching value of define "__znver2__" : (undefined)
00:02:52.193 Fetching value of define "__znver3__" : (undefined)
00:02:52.193 Fetching value of define "__znver4__" : (undefined)
00:02:52.193 Library asan found: YES
00:02:52.193 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:52.193 Message: lib/log: Defining dependency "log"
00:02:52.193 Message: lib/kvargs: Defining dependency "kvargs"
00:02:52.193 Message: lib/telemetry: Defining dependency "telemetry"
00:02:52.193 Library rt found: YES
00:02:52.193 Checking for function "getentropy" : NO
00:02:52.193 Message: lib/eal: Defining dependency "eal"
00:02:52.193 Message: lib/ring: Defining dependency "ring"
00:02:52.193 Message: lib/rcu: Defining dependency "rcu"
00:02:52.193 Message: lib/mempool: Defining dependency "mempool"
00:02:52.193 Message: lib/mbuf: Defining dependency "mbuf"
00:02:52.193 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:52.193 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.193 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.193 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:52.193 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:52.193 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:52.193 Compiler for C supports arguments -mpclmul: YES
00:02:52.193 Compiler for C supports arguments -maes: YES
00:02:52.193 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.193 Compiler for C supports arguments -mavx512bw: YES
00:02:52.193 Compiler for C supports arguments -mavx512dq: YES
00:02:52.193 Compiler for C supports arguments -mavx512vl: YES
00:02:52.193 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:52.193 Compiler for C supports arguments -mavx2: YES
00:02:52.193 Compiler for C supports arguments -mavx: YES
00:02:52.193 Message: lib/net: Defining dependency "net"
00:02:52.193 Message: lib/meter: Defining dependency "meter"
00:02:52.193 Message: lib/ethdev: Defining dependency "ethdev"
00:02:52.193 Message: lib/pci: Defining dependency "pci"
00:02:52.193 Message: lib/cmdline: Defining dependency "cmdline"
00:02:52.193 Message: lib/hash: Defining dependency "hash"
00:02:52.193 Message: lib/timer: Defining dependency "timer"
00:02:52.193 Message: lib/compressdev: Defining dependency "compressdev"
00:02:52.193 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:52.193 Message: lib/dmadev: Defining dependency "dmadev"
00:02:52.193 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:52.193 Message: lib/power: Defining dependency "power"
00:02:52.193 Message: lib/reorder: Defining dependency "reorder"
00:02:52.193 Message: lib/security: Defining dependency "security"
00:02:52.194 Has header "linux/userfaultfd.h" : YES
00:02:52.194 Has header "linux/vduse.h" : YES
00:02:52.194 Message: lib/vhost: Defining dependency "vhost"
00:02:52.194 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:52.194 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:52.194 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:52.194 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.194 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:52.194 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:52.194 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:52.194 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:52.194 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:52.194 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:52.194 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:52.194 Configuring doxy-api-html.conf using configuration
00:02:52.194 Configuring doxy-api-man.conf using configuration
00:02:52.194 Program mandb found: YES (/usr/bin/mandb)
00:02:52.194 Program sphinx-build found: NO
00:02:52.194 Configuring rte_build_config.h using configuration
00:02:52.194 Message:
00:02:52.194 =================
00:02:52.194 Applications Enabled
00:02:52.194 =================
00:02:52.194
00:02:52.194 apps:
00:02:52.194
00:02:52.194
00:02:52.194 Message:
00:02:52.194 =================
00:02:52.194 Libraries Enabled
00:02:52.194 =================
00:02:52.194
00:02:52.194 libs:
00:02:52.194 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.194 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:52.194 cryptodev, dmadev, power, reorder, security, vhost,
00:02:52.194
00:02:52.194 Message:
00:02:52.194 ===============
00:02:52.194 Drivers Enabled
00:02:52.194 ===============
00:02:52.194
00:02:52.194 common:
00:02:52.194
00:02:52.194 bus:
00:02:52.194 pci, vdev,
00:02:52.194 mempool:
00:02:52.194 ring,
00:02:52.194 dma:
00:02:52.194
00:02:52.194 net:
00:02:52.194
00:02:52.194 crypto:
00:02:52.194
00:02:52.194 compress:
00:02:52.194
00:02:52.194 vdpa:
00:02:52.194
00:02:52.194
00:02:52.194 Message:
00:02:52.194 =================
00:02:52.194 Content Skipped
00:02:52.194 =================
00:02:52.194
00:02:52.194 apps:
00:02:52.194 dumpcap: explicitly disabled via build config
00:02:52.194 graph: explicitly disabled via build config
00:02:52.194 pdump: explicitly disabled via build config
00:02:52.194 proc-info: explicitly disabled via build config
00:02:52.194 test-acl: explicitly disabled via build config
00:02:52.194 test-bbdev: explicitly disabled via build config
00:02:52.194 test-cmdline: explicitly disabled via build config
00:02:52.194 test-compress-perf: explicitly disabled via build config
00:02:52.194 test-crypto-perf: explicitly disabled via build config
00:02:52.194 test-dma-perf: explicitly disabled via build config
00:02:52.194 test-eventdev: explicitly disabled via build config
00:02:52.194 test-fib: explicitly disabled via build config
00:02:52.194 test-flow-perf: explicitly disabled via build config
00:02:52.194 test-gpudev: explicitly disabled via build config
00:02:52.194 test-mldev: explicitly disabled via build config
00:02:52.194 test-pipeline: explicitly disabled via build config
00:02:52.194 test-pmd: explicitly disabled via build config
00:02:52.194 test-regex: explicitly disabled via build config
00:02:52.194 test-sad: explicitly disabled via build config
00:02:52.194 test-security-perf: explicitly disabled via build config
00:02:52.194
00:02:52.194 libs:
00:02:52.194 argparse: explicitly disabled via build config
00:02:52.194 metrics: explicitly disabled via build config
00:02:52.194 acl: explicitly disabled via build config
00:02:52.194 bbdev: explicitly disabled via build config
00:02:52.194 bitratestats: explicitly disabled via build config
00:02:52.194 bpf: explicitly disabled via build config
00:02:52.194 cfgfile: explicitly disabled via build config
00:02:52.194 distributor: explicitly disabled via build config
00:02:52.194 efd: explicitly disabled via build config
00:02:52.194 eventdev: explicitly disabled via build config
00:02:52.194 dispatcher: explicitly disabled via build config
00:02:52.194 gpudev: explicitly disabled via build config
00:02:52.194 gro: explicitly disabled via build config
00:02:52.194 gso: explicitly disabled via build config
00:02:52.194 ip_frag: explicitly disabled via build config
00:02:52.194 jobstats: explicitly disabled via build config
00:02:52.194 latencystats: explicitly disabled via build config
00:02:52.194 lpm: explicitly disabled via build config
00:02:52.194 member: explicitly disabled via build config
00:02:52.194 pcapng: explicitly disabled via build config
00:02:52.194 rawdev: explicitly disabled via build config
00:02:52.194 regexdev: explicitly disabled via build config
00:02:52.194 mldev: explicitly disabled via build config
00:02:52.194 rib: explicitly disabled via build config
00:02:52.194 sched: explicitly disabled via build config
00:02:52.194 stack: explicitly disabled via build config
00:02:52.194 ipsec: explicitly disabled via build config
00:02:52.194 pdcp: explicitly disabled via build config
00:02:52.194 fib: explicitly disabled via build config
00:02:52.194 port: explicitly disabled via build config
00:02:52.194 pdump: explicitly disabled via build config
00:02:52.194 table: explicitly disabled via build config
00:02:52.194 pipeline: explicitly disabled via build config
00:02:52.194 graph: explicitly disabled via build config
00:02:52.194 node: explicitly disabled via build config
00:02:52.194
00:02:52.194 drivers:
00:02:52.194 common/cpt: not in enabled drivers build config
00:02:52.194 common/dpaax: not in enabled drivers build config
00:02:52.194 common/iavf: not in enabled drivers build config
00:02:52.194 common/idpf: not in enabled drivers build config
00:02:52.194 common/ionic: not in enabled drivers build config
00:02:52.194 common/mvep: not in enabled drivers build config
00:02:52.194 common/octeontx: not in enabled drivers build config
00:02:52.194 bus/auxiliary: not in enabled drivers build config
00:02:52.194 bus/cdx: not in enabled drivers build config
00:02:52.194 bus/dpaa: not in enabled drivers build config
00:02:52.194 bus/fslmc: not in enabled drivers build config
00:02:52.194 bus/ifpga: not in enabled drivers build config
00:02:52.194 bus/platform: not in enabled drivers build config
00:02:52.194 bus/uacce: not in enabled drivers build config
00:02:52.194 bus/vmbus: not in enabled drivers build config
00:02:52.194 common/cnxk: not in enabled drivers build config
00:02:52.194 common/mlx5: not in enabled drivers build config
00:02:52.194 common/nfp: not in enabled drivers build config
00:02:52.194 common/nitrox: not in enabled drivers build config
00:02:52.194 common/qat: not in enabled drivers build config
00:02:52.194 common/sfc_efx: not in enabled drivers build config
00:02:52.194 mempool/bucket: not in enabled drivers build config
00:02:52.194 mempool/cnxk: not in enabled drivers build config
00:02:52.194 mempool/dpaa: not in enabled drivers build config
00:02:52.194 mempool/dpaa2: not in enabled drivers build config
00:02:52.194 mempool/octeontx: not in enabled drivers build config
00:02:52.194 mempool/stack: not in enabled drivers build config
00:02:52.195 dma/cnxk: not in enabled drivers build config
00:02:52.195 dma/dpaa: not in enabled drivers build config
00:02:52.195 dma/dpaa2: not in enabled drivers build config
00:02:52.195 dma/hisilicon: not in enabled drivers build config
00:02:52.195 dma/idxd: not in enabled drivers build config
00:02:52.195 dma/ioat: not in enabled drivers build config
00:02:52.195 dma/skeleton: not in enabled drivers build config
00:02:52.195 net/af_packet: not in enabled drivers build config
00:02:52.195 net/af_xdp: not in enabled drivers build config
00:02:52.195 net/ark: not in enabled drivers build config
00:02:52.195 net/atlantic: not in enabled drivers build config
00:02:52.195 net/avp: not in enabled drivers build config
00:02:52.195 net/axgbe: not in enabled drivers build config
00:02:52.195 net/bnx2x: not in enabled drivers build config
00:02:52.195 net/bnxt: not in enabled drivers build config
00:02:52.195 net/bonding: not in enabled drivers build config
00:02:52.195 net/cnxk: not in enabled drivers build config
00:02:52.195 net/cpfl: not in enabled drivers
build config 00:02:52.195 net/cxgbe: not in enabled drivers build config 00:02:52.195 net/dpaa: not in enabled drivers build config 00:02:52.195 net/dpaa2: not in enabled drivers build config 00:02:52.195 net/e1000: not in enabled drivers build config 00:02:52.195 net/ena: not in enabled drivers build config 00:02:52.195 net/enetc: not in enabled drivers build config 00:02:52.195 net/enetfec: not in enabled drivers build config 00:02:52.195 net/enic: not in enabled drivers build config 00:02:52.195 net/failsafe: not in enabled drivers build config 00:02:52.195 net/fm10k: not in enabled drivers build config 00:02:52.195 net/gve: not in enabled drivers build config 00:02:52.195 net/hinic: not in enabled drivers build config 00:02:52.195 net/hns3: not in enabled drivers build config 00:02:52.195 net/i40e: not in enabled drivers build config 00:02:52.195 net/iavf: not in enabled drivers build config 00:02:52.195 net/ice: not in enabled drivers build config 00:02:52.195 net/idpf: not in enabled drivers build config 00:02:52.195 net/igc: not in enabled drivers build config 00:02:52.195 net/ionic: not in enabled drivers build config 00:02:52.195 net/ipn3ke: not in enabled drivers build config 00:02:52.195 net/ixgbe: not in enabled drivers build config 00:02:52.195 net/mana: not in enabled drivers build config 00:02:52.195 net/memif: not in enabled drivers build config 00:02:52.195 net/mlx4: not in enabled drivers build config 00:02:52.195 net/mlx5: not in enabled drivers build config 00:02:52.195 net/mvneta: not in enabled drivers build config 00:02:52.195 net/mvpp2: not in enabled drivers build config 00:02:52.195 net/netvsc: not in enabled drivers build config 00:02:52.195 net/nfb: not in enabled drivers build config 00:02:52.195 net/nfp: not in enabled drivers build config 00:02:52.195 net/ngbe: not in enabled drivers build config 00:02:52.195 net/null: not in enabled drivers build config 00:02:52.195 net/octeontx: not in enabled drivers build config 00:02:52.195 net/octeon_ep: not in enabled drivers build config 00:02:52.195 net/pcap: not in enabled drivers build config 00:02:52.195 net/pfe: not in enabled drivers build config 00:02:52.195 net/qede: not in enabled drivers build config 00:02:52.195 net/ring: not in enabled drivers build config 00:02:52.195 net/sfc: not in enabled drivers build config 00:02:52.195 net/softnic: not in enabled drivers build config 00:02:52.195 net/tap: not in enabled drivers build config 00:02:52.195 net/thunderx: not in enabled drivers build config 00:02:52.195 net/txgbe: not in enabled drivers build config 00:02:52.195 net/vdev_netvsc: not in enabled drivers build config 00:02:52.195 net/vhost: not in enabled drivers build config 00:02:52.195 net/virtio: not in enabled drivers build config 00:02:52.195 net/vmxnet3: not in enabled drivers build config 00:02:52.195 raw/*: missing internal dependency, "rawdev" 00:02:52.195 crypto/armv8: not in enabled drivers build config 00:02:52.195 crypto/bcmfs: not in enabled drivers build config 00:02:52.195 crypto/caam_jr: not in enabled drivers build config 00:02:52.195 crypto/ccp: not in enabled drivers build config 00:02:52.195 crypto/cnxk: not in enabled drivers build config 00:02:52.195 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.195 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.195 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.195 crypto/mlx5: not in enabled drivers build config 00:02:52.195 crypto/mvsam: not in enabled drivers build config 00:02:52.195 crypto/nitrox: 
00:02:52.195 crypto/null: not in enabled drivers build config
00:02:52.195 crypto/octeontx: not in enabled drivers build config
00:02:52.195 crypto/openssl: not in enabled drivers build config
00:02:52.195 crypto/scheduler: not in enabled drivers build config
00:02:52.195 crypto/uadk: not in enabled drivers build config
00:02:52.195 crypto/virtio: not in enabled drivers build config
00:02:52.195 compress/isal: not in enabled drivers build config
00:02:52.195 compress/mlx5: not in enabled drivers build config
00:02:52.195 compress/nitrox: not in enabled drivers build config
00:02:52.195 compress/octeontx: not in enabled drivers build config
00:02:52.195 compress/zlib: not in enabled drivers build config
00:02:52.195 regex/*: missing internal dependency, "regexdev"
00:02:52.195 ml/*: missing internal dependency, "mldev"
00:02:52.195 vdpa/ifc: not in enabled drivers build config
00:02:52.195 vdpa/mlx5: not in enabled drivers build config
00:02:52.195 vdpa/nfp: not in enabled drivers build config
00:02:52.195 vdpa/sfc: not in enabled drivers build config
00:02:52.195 event/*: missing internal dependency, "eventdev"
00:02:52.195 baseband/*: missing internal dependency, "bbdev"
00:02:52.195 gpu/*: missing internal dependency, "gpudev"
00:02:52.195
00:02:52.195
00:02:52.195 Build targets in project: 84
00:02:52.195
00:02:52.195 DPDK 24.03.0
00:02:52.195
00:02:52.195 User defined options
00:02:52.195 buildtype : debug
00:02:52.195 default_library : shared
00:02:52.195 libdir : lib
00:02:52.195 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:52.195 b_sanitize : address
00:02:52.195 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:52.195 c_link_args :
00:02:52.195 cpu_instruction_set: native
00:02:52.195 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:52.195 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:52.195 enable_docs : false
00:02:52.195 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:52.195 enable_kmods : false
00:02:52.195 max_lcores : 128
00:02:52.195 tests : false
00:02:52.195
00:02:52.195 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.454 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:52.454 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:52.454 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:52.454 [3/267] Linking static target lib/librte_kvargs.a
00:02:52.454 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:52.454 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:52.454 [6/267] Linking static target lib/librte_log.a
00:02:52.713 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:52.713 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:52.713 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:52.713 [10/267] Compiling
C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:52.713 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.971 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.971 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.971 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.971 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.971 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.971 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.971 [18/267] Linking static target lib/librte_telemetry.a 00:02:53.230 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.230 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.230 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.230 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.230 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.230 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.230 [25/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.488 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.488 [27/267] Linking target lib/librte_log.so.24.1 00:02:53.488 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.488 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.488 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.488 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.747 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.747 [33/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.747 [34/267] Linking target lib/librte_kvargs.so.24.1 00:02:53.747 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.747 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.747 [37/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.747 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.747 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.747 [40/267] Linking target lib/librte_telemetry.so.24.1 00:02:53.747 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.747 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.005 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.005 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.005 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.005 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.005 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.005 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:02:54.264 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.264 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.264 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.264 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.264 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.523 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.523 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.523 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.523 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.523 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.523 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.523 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.523 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.523 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.792 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.792 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.792 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.792 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.792 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.051 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:55.051 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.051 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.051 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.051 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.051 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.051 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.051 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.051 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.309 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.309 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.309 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.568 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.568 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.568 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.568 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.568 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.568 [85/267] Linking static target lib/librte_eal.a 00:02:55.568 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.568 [87/267] Linking static target lib/librte_ring.a 00:02:55.825 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:55.825 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:55.825 [90/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:55.825 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:55.825 [92/267] Linking static target lib/librte_mempool.a 00:02:55.825 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.083 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.083 [95/267] Linking static target lib/librte_rcu.a 00:02:56.083 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.083 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.083 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.342 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.342 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:56.342 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:56.342 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.342 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.342 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:56.600 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:56.600 [106/267] Linking static target lib/librte_meter.a 00:02:56.600 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:56.600 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.600 [109/267] Linking static target lib/librte_net.a 00:02:56.600 [110/267] Linking static target lib/librte_mbuf.a 00:02:56.600 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:56.600 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:56.858 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.859 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.859 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.859 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:56.859 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.859 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.117 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.376 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.376 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.376 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.376 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:57.635 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.635 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.635 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.635 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.635 [128/267] Linking static target lib/librte_pci.a 00:02:57.635 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.635 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.635 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.894 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.894 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.894 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.894 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.894 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.894 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.894 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.894 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.894 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.894 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.894 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.894 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.894 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.154 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.154 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:58.154 [147/267] Linking static target lib/librte_cmdline.a 00:02:58.154 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.154 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:58.413 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.413 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.413 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.413 [153/267] Linking static target lib/librte_timer.a 00:02:58.671 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.671 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.671 [156/267] Linking static target lib/librte_compressdev.a 00:02:58.672 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.672 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.672 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.672 [160/267] Linking static target lib/librte_hash.a 00:02:58.930 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.930 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.930 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.930 [164/267] Linking static target lib/librte_ethdev.a 00:02:58.930 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.930 [166/267] Linking static target lib/librte_dmadev.a 00:02:59.189 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.189 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.189 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.189 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.189 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.448 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.448 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.448 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.448 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.718 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.718 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:59.718 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.718 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:59.718 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.718 [181/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.718 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.718 [183/267] Linking static target lib/librte_cryptodev.a 00:02:59.718 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.981 [185/267] Linking static target lib/librte_power.a 00:02:59.981 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:59.981 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:59.981 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.981 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.240 [190/267] Linking static target lib/librte_reorder.a 00:03:00.240 [191/267] Linking static target lib/librte_security.a 00:03:00.240 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.498 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:00.498 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.758 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.758 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.758 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:00.758 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:00.758 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.017 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.017 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.017 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.277 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.277 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.277 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.277 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.277 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.536 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.536 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.536 [210/267] Generating drivers/rte_bus_vdev.pmd.c 
with a custom command 00:03:01.536 [211/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.536 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.536 [213/267] Linking static target drivers/librte_bus_vdev.a 00:03:01.536 [214/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.536 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.536 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.536 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.795 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.795 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.795 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:01.795 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.795 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.795 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.795 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:01.795 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.053 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.312 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:03.689 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.689 [229/267] Linking target lib/librte_eal.so.24.1 00:03:03.689 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.689 [231/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.689 [232/267] Linking target lib/librte_timer.so.24.1 00:03:03.689 [233/267] Linking target lib/librte_dmadev.so.24.1 00:03:03.689 [234/267] Linking target lib/librte_ring.so.24.1 00:03:03.689 [235/267] Linking target lib/librte_meter.so.24.1 00:03:03.689 [236/267] Linking target lib/librte_pci.so.24.1 00:03:03.689 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.689 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.689 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.689 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.689 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.689 [242/267] Linking target lib/librte_mempool.so.24.1 00:03:03.689 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:03.689 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.948 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.948 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.948 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:03.948 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:03.948 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:03.948 [250/267] Linking 
target lib/librte_compressdev.so.24.1 00:03:03.948 [251/267] Linking target lib/librte_net.so.24.1 00:03:03.948 [252/267] Linking target lib/librte_reorder.so.24.1 00:03:04.207 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:04.207 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.207 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.207 [256/267] Linking target lib/librte_hash.so.24.1 00:03:04.207 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:04.207 [258/267] Linking target lib/librte_security.so.24.1 00:03:04.207 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.466 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.466 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:04.725 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:04.725 [263/267] Linking target lib/librte_power.so.24.1 00:03:05.292 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.292 [265/267] Linking static target lib/librte_vhost.a 00:03:06.784 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.784 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:06.784 INFO: autodetecting backend as ninja 00:03:06.784 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:21.656 CC lib/log/log_deprecated.o 00:03:21.656 CC lib/ut/ut.o 00:03:21.656 CC lib/log/log_flags.o 00:03:21.656 CC lib/log/log.o 00:03:21.656 CC lib/ut_mock/mock.o 00:03:21.656 LIB libspdk_ut.a 00:03:21.914 LIB libspdk_log.a 00:03:21.914 SO libspdk_ut.so.2.0 00:03:21.914 LIB libspdk_ut_mock.a 00:03:21.914 SO libspdk_log.so.7.1 00:03:21.914 SO libspdk_ut_mock.so.6.0 00:03:21.914 SYMLINK libspdk_ut.so 00:03:21.914 SYMLINK libspdk_ut_mock.so 00:03:21.914 SYMLINK libspdk_log.so 00:03:22.172 CXX lib/trace_parser/trace.o 00:03:22.172 CC lib/util/base64.o 00:03:22.172 CC lib/util/bit_array.o 00:03:22.172 CC lib/util/cpuset.o 00:03:22.172 CC lib/util/crc16.o 00:03:22.172 CC lib/util/crc32.o 00:03:22.172 CC lib/util/crc32c.o 00:03:22.172 CC lib/ioat/ioat.o 00:03:22.172 CC lib/dma/dma.o 00:03:22.172 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.172 CC lib/util/crc32_ieee.o 00:03:22.172 CC lib/util/crc64.o 00:03:22.172 CC lib/vfio_user/host/vfio_user.o 00:03:22.172 CC lib/util/dif.o 00:03:22.172 CC lib/util/fd.o 00:03:22.172 LIB libspdk_dma.a 00:03:22.172 CC lib/util/fd_group.o 00:03:22.172 CC lib/util/file.o 00:03:22.172 SO libspdk_dma.so.5.0 00:03:22.172 CC lib/util/hexlify.o 00:03:22.172 SYMLINK libspdk_dma.so 00:03:22.430 CC lib/util/iov.o 00:03:22.430 LIB libspdk_ioat.a 00:03:22.430 SO libspdk_ioat.so.7.0 00:03:22.430 CC lib/util/math.o 00:03:22.430 CC lib/util/net.o 00:03:22.431 LIB libspdk_vfio_user.a 00:03:22.431 SYMLINK libspdk_ioat.so 00:03:22.431 CC lib/util/pipe.o 00:03:22.431 CC lib/util/strerror_tls.o 00:03:22.431 SO libspdk_vfio_user.so.5.0 00:03:22.431 CC lib/util/string.o 00:03:22.431 CC lib/util/uuid.o 00:03:22.431 SYMLINK libspdk_vfio_user.so 00:03:22.431 CC lib/util/xor.o 00:03:22.431 CC lib/util/zipf.o 00:03:22.431 CC lib/util/md5.o 00:03:22.689 LIB libspdk_trace_parser.a 00:03:22.689 SO libspdk_trace_parser.so.6.0 00:03:22.947 LIB libspdk_util.a 00:03:22.947 SYMLINK libspdk_trace_parser.so 00:03:22.947 SO libspdk_util.so.10.1 
00:03:22.947 SYMLINK libspdk_util.so 00:03:23.206 CC lib/idxd/idxd.o 00:03:23.206 CC lib/idxd/idxd_user.o 00:03:23.206 CC lib/json/json_parse.o 00:03:23.206 CC lib/idxd/idxd_kernel.o 00:03:23.206 CC lib/env_dpdk/env.o 00:03:23.206 CC lib/json/json_util.o 00:03:23.206 CC lib/env_dpdk/memory.o 00:03:23.206 CC lib/rdma_utils/rdma_utils.o 00:03:23.206 CC lib/vmd/vmd.o 00:03:23.206 CC lib/conf/conf.o 00:03:23.206 CC lib/vmd/led.o 00:03:23.465 LIB libspdk_conf.a 00:03:23.465 SO libspdk_conf.so.6.0 00:03:23.465 CC lib/json/json_write.o 00:03:23.465 CC lib/env_dpdk/pci.o 00:03:23.465 CC lib/env_dpdk/init.o 00:03:23.465 SYMLINK libspdk_conf.so 00:03:23.465 CC lib/env_dpdk/threads.o 00:03:23.465 LIB libspdk_rdma_utils.a 00:03:23.465 SO libspdk_rdma_utils.so.1.0 00:03:23.465 CC lib/env_dpdk/pci_ioat.o 00:03:23.465 SYMLINK libspdk_rdma_utils.so 00:03:23.465 CC lib/env_dpdk/pci_virtio.o 00:03:23.465 CC lib/env_dpdk/pci_vmd.o 00:03:23.723 CC lib/env_dpdk/pci_idxd.o 00:03:23.723 CC lib/env_dpdk/pci_event.o 00:03:23.723 LIB libspdk_json.a 00:03:23.723 SO libspdk_json.so.6.0 00:03:23.723 LIB libspdk_vmd.a 00:03:23.723 CC lib/env_dpdk/sigbus_handler.o 00:03:23.723 LIB libspdk_idxd.a 00:03:23.723 SO libspdk_vmd.so.6.0 00:03:23.723 CC lib/env_dpdk/pci_dpdk.o 00:03:23.723 SO libspdk_idxd.so.12.1 00:03:23.723 SYMLINK libspdk_json.so 00:03:23.723 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:23.723 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:23.723 SYMLINK libspdk_vmd.so 00:03:23.723 SYMLINK libspdk_idxd.so 00:03:23.723 CC lib/rdma_provider/common.o 00:03:23.723 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:23.981 CC lib/jsonrpc/jsonrpc_server.o 00:03:23.981 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:23.981 CC lib/jsonrpc/jsonrpc_client.o 00:03:23.981 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.981 LIB libspdk_rdma_provider.a 00:03:23.981 SO libspdk_rdma_provider.so.7.0 00:03:24.240 SYMLINK libspdk_rdma_provider.so 00:03:24.240 LIB libspdk_jsonrpc.a 00:03:24.240 SO libspdk_jsonrpc.so.6.0 00:03:24.240 SYMLINK libspdk_jsonrpc.so 00:03:24.498 CC lib/rpc/rpc.o 00:03:24.756 LIB libspdk_env_dpdk.a 00:03:24.756 SO libspdk_env_dpdk.so.15.1 00:03:24.756 LIB libspdk_rpc.a 00:03:24.756 SO libspdk_rpc.so.6.0 00:03:24.756 SYMLINK libspdk_rpc.so 00:03:24.756 SYMLINK libspdk_env_dpdk.so 00:03:25.016 CC lib/notify/notify_rpc.o 00:03:25.016 CC lib/notify/notify.o 00:03:25.016 CC lib/keyring/keyring.o 00:03:25.016 CC lib/keyring/keyring_rpc.o 00:03:25.016 CC lib/trace/trace.o 00:03:25.016 CC lib/trace/trace_flags.o 00:03:25.016 CC lib/trace/trace_rpc.o 00:03:25.016 LIB libspdk_notify.a 00:03:25.275 SO libspdk_notify.so.6.0 00:03:25.275 LIB libspdk_keyring.a 00:03:25.275 SO libspdk_keyring.so.2.0 00:03:25.275 SYMLINK libspdk_notify.so 00:03:25.275 SYMLINK libspdk_keyring.so 00:03:25.275 LIB libspdk_trace.a 00:03:25.275 SO libspdk_trace.so.11.0 00:03:25.275 SYMLINK libspdk_trace.so 00:03:25.534 CC lib/thread/iobuf.o 00:03:25.534 CC lib/thread/thread.o 00:03:25.534 CC lib/sock/sock.o 00:03:25.534 CC lib/sock/sock_rpc.o 00:03:26.101 LIB libspdk_sock.a 00:03:26.101 SO libspdk_sock.so.10.0 00:03:26.101 SYMLINK libspdk_sock.so 00:03:26.360 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:26.360 CC lib/nvme/nvme_ctrlr.o 00:03:26.360 CC lib/nvme/nvme_ns.o 00:03:26.360 CC lib/nvme/nvme_fabric.o 00:03:26.360 CC lib/nvme/nvme.o 00:03:26.360 CC lib/nvme/nvme_ns_cmd.o 00:03:26.360 CC lib/nvme/nvme_qpair.o 00:03:26.360 CC lib/nvme/nvme_pcie.o 00:03:26.360 CC lib/nvme/nvme_pcie_common.o 00:03:26.926 CC lib/nvme/nvme_quirks.o 00:03:26.926 CC lib/nvme/nvme_transport.o 
00:03:26.926 CC lib/nvme/nvme_discovery.o 00:03:26.926 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.926 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.926 CC lib/nvme/nvme_tcp.o 00:03:26.926 LIB libspdk_thread.a 00:03:27.185 SO libspdk_thread.so.11.0 00:03:27.185 SYMLINK libspdk_thread.so 00:03:27.185 CC lib/nvme/nvme_opal.o 00:03:27.185 CC lib/nvme/nvme_io_msg.o 00:03:27.185 CC lib/nvme/nvme_poll_group.o 00:03:27.185 CC lib/nvme/nvme_zns.o 00:03:27.444 CC lib/nvme/nvme_stubs.o 00:03:27.444 CC lib/nvme/nvme_auth.o 00:03:27.444 CC lib/nvme/nvme_cuse.o 00:03:27.444 CC lib/nvme/nvme_rdma.o 00:03:27.703 CC lib/accel/accel.o 00:03:27.703 CC lib/accel/accel_rpc.o 00:03:27.703 CC lib/blob/blobstore.o 00:03:27.703 CC lib/blob/request.o 00:03:27.703 CC lib/blob/zeroes.o 00:03:27.962 CC lib/blob/blob_bs_dev.o 00:03:27.962 CC lib/accel/accel_sw.o 00:03:27.962 CC lib/init/json_config.o 00:03:28.221 CC lib/init/subsystem.o 00:03:28.221 CC lib/virtio/virtio.o 00:03:28.221 CC lib/virtio/virtio_vhost_user.o 00:03:28.221 CC lib/init/subsystem_rpc.o 00:03:28.221 CC lib/init/rpc.o 00:03:28.221 CC lib/virtio/virtio_vfio_user.o 00:03:28.221 CC lib/virtio/virtio_pci.o 00:03:28.480 LIB libspdk_init.a 00:03:28.480 SO libspdk_init.so.6.0 00:03:28.480 LIB libspdk_accel.a 00:03:28.480 CC lib/fsdev/fsdev.o 00:03:28.480 CC lib/fsdev/fsdev_io.o 00:03:28.480 SYMLINK libspdk_init.so 00:03:28.480 CC lib/fsdev/fsdev_rpc.o 00:03:28.480 SO libspdk_accel.so.16.0 00:03:28.480 LIB libspdk_virtio.a 00:03:28.480 SYMLINK libspdk_accel.so 00:03:28.738 SO libspdk_virtio.so.7.0 00:03:28.738 CC lib/event/app.o 00:03:28.738 CC lib/event/log_rpc.o 00:03:28.738 CC lib/event/app_rpc.o 00:03:28.738 CC lib/event/reactor.o 00:03:28.738 SYMLINK libspdk_virtio.so 00:03:28.738 CC lib/event/scheduler_static.o 00:03:28.738 LIB libspdk_nvme.a 00:03:28.738 CC lib/bdev/bdev.o 00:03:28.738 CC lib/bdev/bdev_rpc.o 00:03:28.738 CC lib/bdev/bdev_zone.o 00:03:28.738 CC lib/bdev/part.o 00:03:28.738 SO libspdk_nvme.so.15.0 00:03:28.997 CC lib/bdev/scsi_nvme.o 00:03:28.997 LIB libspdk_fsdev.a 00:03:28.997 SO libspdk_fsdev.so.2.0 00:03:28.997 SYMLINK libspdk_nvme.so 00:03:28.997 SYMLINK libspdk_fsdev.so 00:03:28.997 LIB libspdk_event.a 00:03:29.256 SO libspdk_event.so.14.0 00:03:29.256 SYMLINK libspdk_event.so 00:03:29.256 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:29.822 LIB libspdk_fuse_dispatcher.a 00:03:29.822 SO libspdk_fuse_dispatcher.so.1.0 00:03:29.822 SYMLINK libspdk_fuse_dispatcher.so 00:03:31.198 LIB libspdk_blob.a 00:03:31.198 SO libspdk_blob.so.12.0 00:03:31.198 SYMLINK libspdk_blob.so 00:03:31.456 CC lib/lvol/lvol.o 00:03:31.456 CC lib/blobfs/blobfs.o 00:03:31.456 CC lib/blobfs/tree.o 00:03:31.456 LIB libspdk_bdev.a 00:03:31.456 SO libspdk_bdev.so.17.0 00:03:31.714 SYMLINK libspdk_bdev.so 00:03:31.714 CC lib/scsi/dev.o 00:03:31.714 CC lib/scsi/port.o 00:03:31.714 CC lib/scsi/lun.o 00:03:31.714 CC lib/scsi/scsi.o 00:03:31.714 CC lib/nvmf/ctrlr.o 00:03:31.714 CC lib/nbd/nbd.o 00:03:31.714 CC lib/ftl/ftl_core.o 00:03:31.714 CC lib/ublk/ublk.o 00:03:31.972 CC lib/ublk/ublk_rpc.o 00:03:31.972 CC lib/ftl/ftl_init.o 00:03:31.972 CC lib/ftl/ftl_layout.o 00:03:31.972 CC lib/ftl/ftl_debug.o 00:03:31.972 CC lib/ftl/ftl_io.o 00:03:32.231 CC lib/scsi/scsi_bdev.o 00:03:32.231 CC lib/ftl/ftl_sb.o 00:03:32.231 CC lib/nbd/nbd_rpc.o 00:03:32.231 LIB libspdk_blobfs.a 00:03:32.231 CC lib/ftl/ftl_l2p.o 00:03:32.231 CC lib/ftl/ftl_l2p_flat.o 00:03:32.231 LIB libspdk_ublk.a 00:03:32.231 SO libspdk_blobfs.so.11.0 00:03:32.231 SO libspdk_ublk.so.3.0 00:03:32.231 CC 
lib/ftl/ftl_nv_cache.o 00:03:32.231 LIB libspdk_nbd.a 00:03:32.231 CC lib/ftl/ftl_band.o 00:03:32.231 SYMLINK libspdk_blobfs.so 00:03:32.231 CC lib/ftl/ftl_band_ops.o 00:03:32.231 SO libspdk_nbd.so.7.0 00:03:32.490 SYMLINK libspdk_ublk.so 00:03:32.490 CC lib/ftl/ftl_writer.o 00:03:32.490 LIB libspdk_lvol.a 00:03:32.490 CC lib/ftl/ftl_rq.o 00:03:32.490 SYMLINK libspdk_nbd.so 00:03:32.490 CC lib/ftl/ftl_reloc.o 00:03:32.490 CC lib/ftl/ftl_l2p_cache.o 00:03:32.490 SO libspdk_lvol.so.11.0 00:03:32.490 SYMLINK libspdk_lvol.so 00:03:32.490 CC lib/ftl/ftl_p2l.o 00:03:32.490 CC lib/ftl/ftl_p2l_log.o 00:03:32.490 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.490 CC lib/scsi/scsi_pr.o 00:03:32.748 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:32.748 CC lib/scsi/scsi_rpc.o 00:03:32.748 CC lib/scsi/task.o 00:03:32.748 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:32.748 CC lib/nvmf/ctrlr_discovery.o 00:03:32.748 CC lib/nvmf/ctrlr_bdev.o 00:03:32.748 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:32.748 CC lib/nvmf/subsystem.o 00:03:33.006 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.006 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.006 LIB libspdk_scsi.a 00:03:33.006 SO libspdk_scsi.so.9.0 00:03:33.006 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.006 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.006 CC lib/nvmf/nvmf.o 00:03:33.264 SYMLINK libspdk_scsi.so 00:03:33.264 CC lib/nvmf/nvmf_rpc.o 00:03:33.264 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.264 CC lib/nvmf/transport.o 00:03:33.264 CC lib/nvmf/tcp.o 00:03:33.264 CC lib/nvmf/stubs.o 00:03:33.264 CC lib/nvmf/mdns_server.o 00:03:33.522 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.522 CC lib/iscsi/conn.o 00:03:33.522 CC lib/iscsi/init_grp.o 00:03:33.522 CC lib/nvmf/rdma.o 00:03:33.522 CC lib/vhost/vhost.o 00:03:33.779 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.779 CC lib/iscsi/iscsi.o 00:03:33.779 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.036 CC lib/nvmf/auth.o 00:03:34.036 CC lib/vhost/vhost_rpc.o 00:03:34.036 CC lib/vhost/vhost_scsi.o 00:03:34.036 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.036 CC lib/ftl/utils/ftl_conf.o 00:03:34.294 CC lib/ftl/utils/ftl_md.o 00:03:34.294 CC lib/ftl/utils/ftl_mempool.o 00:03:34.294 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.294 CC lib/vhost/vhost_blk.o 00:03:34.552 CC lib/vhost/rte_vhost_user.o 00:03:34.552 CC lib/ftl/utils/ftl_property.o 00:03:34.552 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.552 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.811 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.811 CC lib/iscsi/param.o 00:03:34.811 CC lib/iscsi/portal_grp.o 00:03:34.811 CC lib/iscsi/tgt_node.o 00:03:34.811 CC lib/iscsi/iscsi_subsystem.o 00:03:35.080 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.080 CC lib/iscsi/iscsi_rpc.o 00:03:35.080 CC lib/iscsi/task.o 00:03:35.080 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:35.080 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:35.080 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:35.339 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:35.339 CC lib/ftl/base/ftl_base_dev.o 00:03:35.339 CC lib/ftl/base/ftl_base_bdev.o 00:03:35.339 CC lib/ftl/ftl_trace.o 00:03:35.339 LIB libspdk_nvmf.a 00:03:35.339 LIB libspdk_iscsi.a 00:03:35.339 LIB libspdk_vhost.a 00:03:35.339 LIB libspdk_ftl.a 00:03:35.339 SO libspdk_nvmf.so.20.0 00:03:35.339 SO libspdk_iscsi.so.8.0 00:03:35.339 SO libspdk_vhost.so.8.0 00:03:35.597 SYMLINK 
libspdk_vhost.so 00:03:35.597 SYMLINK libspdk_iscsi.so 00:03:35.597 SO libspdk_ftl.so.9.0 00:03:35.597 SYMLINK libspdk_nvmf.so 00:03:35.855 SYMLINK libspdk_ftl.so 00:03:36.114 CC module/env_dpdk/env_dpdk_rpc.o 00:03:36.114 CC module/accel/error/accel_error.o 00:03:36.114 CC module/accel/dsa/accel_dsa.o 00:03:36.114 CC module/accel/ioat/accel_ioat.o 00:03:36.114 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.114 CC module/fsdev/aio/fsdev_aio.o 00:03:36.114 CC module/accel/iaa/accel_iaa.o 00:03:36.114 CC module/keyring/file/keyring.o 00:03:36.114 CC module/blob/bdev/blob_bdev.o 00:03:36.114 CC module/sock/posix/posix.o 00:03:36.114 LIB libspdk_env_dpdk_rpc.a 00:03:36.114 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.114 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.114 CC module/accel/error/accel_error_rpc.o 00:03:36.373 CC module/keyring/file/keyring_rpc.o 00:03:36.373 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.373 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.373 LIB libspdk_scheduler_dynamic.a 00:03:36.373 LIB libspdk_accel_error.a 00:03:36.373 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.373 LIB libspdk_keyring_file.a 00:03:36.373 LIB libspdk_blob_bdev.a 00:03:36.373 LIB libspdk_accel_ioat.a 00:03:36.373 SO libspdk_accel_error.so.2.0 00:03:36.373 SO libspdk_keyring_file.so.2.0 00:03:36.373 SO libspdk_accel_ioat.so.6.0 00:03:36.373 SO libspdk_blob_bdev.so.12.0 00:03:36.373 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.373 LIB libspdk_accel_iaa.a 00:03:36.373 SYMLINK libspdk_accel_error.so 00:03:36.373 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.373 SYMLINK libspdk_keyring_file.so 00:03:36.373 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.373 SO libspdk_accel_iaa.so.3.0 00:03:36.373 SYMLINK libspdk_blob_bdev.so 00:03:36.373 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:36.373 SYMLINK libspdk_accel_ioat.so 00:03:36.373 CC module/fsdev/aio/linux_aio_mgr.o 00:03:36.373 SYMLINK libspdk_accel_iaa.so 00:03:36.631 LIB libspdk_accel_dsa.a 00:03:36.631 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.632 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.632 SO libspdk_accel_dsa.so.5.0 00:03:36.632 CC module/keyring/linux/keyring.o 00:03:36.632 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.632 CC module/keyring/linux/keyring_rpc.o 00:03:36.632 SYMLINK libspdk_accel_dsa.so 00:03:36.632 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.632 CC module/bdev/delay/vbdev_delay.o 00:03:36.632 LIB libspdk_scheduler_gscheduler.a 00:03:36.632 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.632 LIB libspdk_keyring_linux.a 00:03:36.632 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.632 SO libspdk_keyring_linux.so.1.0 00:03:36.632 CC module/bdev/error/vbdev_error.o 00:03:36.632 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.632 CC module/bdev/gpt/gpt.o 00:03:36.632 LIB libspdk_sock_posix.a 00:03:36.632 CC module/bdev/lvol/vbdev_lvol.o 00:03:36.632 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.632 SYMLINK libspdk_keyring_linux.so 00:03:36.890 SO libspdk_sock_posix.so.6.0 00:03:36.890 CC module/bdev/malloc/bdev_malloc.o 00:03:36.890 LIB libspdk_fsdev_aio.a 00:03:36.890 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:36.890 SYMLINK libspdk_sock_posix.so 00:03:36.890 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.890 SO libspdk_fsdev_aio.so.1.0 00:03:36.890 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:36.890 CC module/bdev/null/bdev_null.o 00:03:36.890 SYMLINK libspdk_fsdev_aio.so 00:03:36.890 CC module/bdev/null/bdev_null_rpc.o 00:03:36.890 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.890 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.890 LIB libspdk_bdev_delay.a 00:03:36.890 SO libspdk_bdev_delay.so.6.0 00:03:36.890 LIB libspdk_blobfs_bdev.a 00:03:36.890 SYMLINK libspdk_bdev_delay.so 00:03:36.890 SO libspdk_blobfs_bdev.so.6.0 00:03:37.148 LIB libspdk_bdev_error.a 00:03:37.148 SYMLINK libspdk_blobfs_bdev.so 00:03:37.148 SO libspdk_bdev_error.so.6.0 00:03:37.148 LIB libspdk_bdev_malloc.a 00:03:37.148 LIB libspdk_bdev_gpt.a 00:03:37.148 SO libspdk_bdev_malloc.so.6.0 00:03:37.148 SYMLINK libspdk_bdev_error.so 00:03:37.149 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.149 CC module/bdev/nvme/bdev_nvme.o 00:03:37.149 SO libspdk_bdev_gpt.so.6.0 00:03:37.149 LIB libspdk_bdev_null.a 00:03:37.149 CC module/bdev/split/vbdev_split.o 00:03:37.149 SYMLINK libspdk_bdev_malloc.so 00:03:37.149 CC module/bdev/raid/bdev_raid.o 00:03:37.149 SO libspdk_bdev_null.so.6.0 00:03:37.149 SYMLINK libspdk_bdev_gpt.so 00:03:37.149 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.149 LIB libspdk_bdev_lvol.a 00:03:37.149 SYMLINK libspdk_bdev_null.so 00:03:37.149 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.149 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.149 SO libspdk_bdev_lvol.so.6.0 00:03:37.149 CC module/bdev/xnvme/bdev_xnvme.o 00:03:37.408 SYMLINK libspdk_bdev_lvol.so 00:03:37.408 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:37.408 CC module/bdev/aio/bdev_aio.o 00:03:37.408 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.408 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:37.408 CC module/bdev/aio/bdev_aio_rpc.o 00:03:37.408 CC module/bdev/nvme/nvme_rpc.o 00:03:37.408 LIB libspdk_bdev_split.a 00:03:37.408 LIB libspdk_bdev_passthru.a 00:03:37.408 SO libspdk_bdev_passthru.so.6.0 00:03:37.408 SO libspdk_bdev_split.so.6.0 00:03:37.408 CC module/bdev/nvme/bdev_mdns_client.o 00:03:37.408 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.408 SYMLINK libspdk_bdev_passthru.so 00:03:37.408 SYMLINK libspdk_bdev_split.so 00:03:37.408 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.668 LIB libspdk_bdev_xnvme.a 00:03:37.668 SO libspdk_bdev_xnvme.so.3.0 00:03:37.668 CC module/bdev/nvme/vbdev_opal.o 00:03:37.668 LIB libspdk_bdev_aio.a 00:03:37.668 LIB libspdk_bdev_zone_block.a 00:03:37.668 SYMLINK libspdk_bdev_xnvme.so 00:03:37.668 CC module/bdev/ftl/bdev_ftl.o 00:03:37.668 SO libspdk_bdev_aio.so.6.0 00:03:37.668 SO libspdk_bdev_zone_block.so.6.0 00:03:37.668 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.668 SYMLINK libspdk_bdev_aio.so 00:03:37.668 CC module/bdev/raid/raid0.o 00:03:37.668 CC module/bdev/raid/raid1.o 00:03:37.668 SYMLINK libspdk_bdev_zone_block.so 00:03:37.668 CC module/bdev/raid/concat.o 00:03:37.668 CC module/bdev/iscsi/bdev_iscsi.o 00:03:37.927 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:37.927 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:37.927 LIB libspdk_bdev_ftl.a 00:03:37.927 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:37.927 SO libspdk_bdev_ftl.so.6.0 00:03:37.927 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:37.927 SYMLINK libspdk_bdev_ftl.so 00:03:37.927 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:37.927 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.185 LIB libspdk_bdev_iscsi.a 00:03:38.185 SO libspdk_bdev_iscsi.so.6.0 00:03:38.185 SYMLINK libspdk_bdev_iscsi.so 00:03:38.185 LIB libspdk_bdev_raid.a 00:03:38.186 SO libspdk_bdev_raid.so.6.0 00:03:38.186 LIB libspdk_bdev_virtio.a 00:03:38.469 SYMLINK libspdk_bdev_raid.so 00:03:38.469 SO libspdk_bdev_virtio.so.6.0 00:03:38.469 SYMLINK libspdk_bdev_virtio.so 00:03:39.442 LIB libspdk_bdev_nvme.a 00:03:39.442 SO 
libspdk_bdev_nvme.so.7.1 00:03:39.442 SYMLINK libspdk_bdev_nvme.so 00:03:39.701 CC module/event/subsystems/vmd/vmd.o 00:03:39.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:39.701 CC module/event/subsystems/sock/sock.o 00:03:39.701 CC module/event/subsystems/iobuf/iobuf.o 00:03:39.701 CC module/event/subsystems/keyring/keyring.o 00:03:39.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:39.701 CC module/event/subsystems/fsdev/fsdev.o 00:03:39.959 CC module/event/subsystems/scheduler/scheduler.o 00:03:39.959 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:39.959 LIB libspdk_event_keyring.a 00:03:39.959 LIB libspdk_event_sock.a 00:03:39.959 LIB libspdk_event_scheduler.a 00:03:39.959 LIB libspdk_event_vmd.a 00:03:39.959 LIB libspdk_event_vhost_blk.a 00:03:39.959 LIB libspdk_event_iobuf.a 00:03:39.959 SO libspdk_event_keyring.so.1.0 00:03:39.959 SO libspdk_event_scheduler.so.4.0 00:03:39.959 LIB libspdk_event_fsdev.a 00:03:39.959 SO libspdk_event_sock.so.5.0 00:03:39.959 SO libspdk_event_vmd.so.6.0 00:03:39.959 SO libspdk_event_iobuf.so.3.0 00:03:39.959 SO libspdk_event_vhost_blk.so.3.0 00:03:39.959 SO libspdk_event_fsdev.so.1.0 00:03:39.959 SYMLINK libspdk_event_keyring.so 00:03:39.959 SYMLINK libspdk_event_scheduler.so 00:03:39.959 SYMLINK libspdk_event_sock.so 00:03:39.959 SYMLINK libspdk_event_vmd.so 00:03:39.959 SYMLINK libspdk_event_fsdev.so 00:03:39.959 SYMLINK libspdk_event_vhost_blk.so 00:03:39.959 SYMLINK libspdk_event_iobuf.so 00:03:40.217 CC module/event/subsystems/accel/accel.o 00:03:40.217 LIB libspdk_event_accel.a 00:03:40.476 SO libspdk_event_accel.so.6.0 00:03:40.476 SYMLINK libspdk_event_accel.so 00:03:40.735 CC module/event/subsystems/bdev/bdev.o 00:03:40.735 LIB libspdk_event_bdev.a 00:03:40.735 SO libspdk_event_bdev.so.6.0 00:03:40.995 SYMLINK libspdk_event_bdev.so 00:03:40.995 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:40.995 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:40.995 CC module/event/subsystems/ublk/ublk.o 00:03:40.995 CC module/event/subsystems/nbd/nbd.o 00:03:40.995 CC module/event/subsystems/scsi/scsi.o 00:03:41.254 LIB libspdk_event_ublk.a 00:03:41.254 LIB libspdk_event_nbd.a 00:03:41.254 LIB libspdk_event_scsi.a 00:03:41.254 SO libspdk_event_ublk.so.3.0 00:03:41.254 SO libspdk_event_nbd.so.6.0 00:03:41.254 SO libspdk_event_scsi.so.6.0 00:03:41.254 SYMLINK libspdk_event_nbd.so 00:03:41.254 SYMLINK libspdk_event_ublk.so 00:03:41.254 LIB libspdk_event_nvmf.a 00:03:41.254 SYMLINK libspdk_event_scsi.so 00:03:41.254 SO libspdk_event_nvmf.so.6.0 00:03:41.254 SYMLINK libspdk_event_nvmf.so 00:03:41.514 CC module/event/subsystems/iscsi/iscsi.o 00:03:41.514 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:41.514 LIB libspdk_event_iscsi.a 00:03:41.514 LIB libspdk_event_vhost_scsi.a 00:03:41.514 SO libspdk_event_vhost_scsi.so.3.0 00:03:41.514 SO libspdk_event_iscsi.so.6.0 00:03:41.514 SYMLINK libspdk_event_iscsi.so 00:03:41.514 SYMLINK libspdk_event_vhost_scsi.so 00:03:41.772 SO libspdk.so.6.0 00:03:41.772 SYMLINK libspdk.so 00:03:42.032 CC test/rpc_client/rpc_client_test.o 00:03:42.032 CXX app/trace/trace.o 00:03:42.032 CC app/trace_record/trace_record.o 00:03:42.032 TEST_HEADER include/spdk/accel.h 00:03:42.032 TEST_HEADER include/spdk/accel_module.h 00:03:42.032 TEST_HEADER include/spdk/assert.h 00:03:42.032 TEST_HEADER include/spdk/barrier.h 00:03:42.032 TEST_HEADER include/spdk/base64.h 00:03:42.032 TEST_HEADER include/spdk/bdev.h 00:03:42.032 TEST_HEADER include/spdk/bdev_module.h 00:03:42.032 TEST_HEADER include/spdk/bdev_zone.h 
00:03:42.032 TEST_HEADER include/spdk/bit_array.h 00:03:42.032 TEST_HEADER include/spdk/bit_pool.h 00:03:42.032 TEST_HEADER include/spdk/blob_bdev.h 00:03:42.032 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:42.032 TEST_HEADER include/spdk/blobfs.h 00:03:42.032 TEST_HEADER include/spdk/blob.h 00:03:42.032 TEST_HEADER include/spdk/conf.h 00:03:42.032 TEST_HEADER include/spdk/config.h 00:03:42.032 TEST_HEADER include/spdk/cpuset.h 00:03:42.032 TEST_HEADER include/spdk/crc16.h 00:03:42.032 CC app/nvmf_tgt/nvmf_main.o 00:03:42.032 TEST_HEADER include/spdk/crc32.h 00:03:42.032 TEST_HEADER include/spdk/crc64.h 00:03:42.032 TEST_HEADER include/spdk/dif.h 00:03:42.032 TEST_HEADER include/spdk/dma.h 00:03:42.032 TEST_HEADER include/spdk/endian.h 00:03:42.032 TEST_HEADER include/spdk/env_dpdk.h 00:03:42.032 TEST_HEADER include/spdk/env.h 00:03:42.032 TEST_HEADER include/spdk/event.h 00:03:42.032 TEST_HEADER include/spdk/fd_group.h 00:03:42.032 TEST_HEADER include/spdk/fd.h 00:03:42.032 TEST_HEADER include/spdk/file.h 00:03:42.032 TEST_HEADER include/spdk/fsdev.h 00:03:42.032 TEST_HEADER include/spdk/fsdev_module.h 00:03:42.032 TEST_HEADER include/spdk/ftl.h 00:03:42.033 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:42.033 TEST_HEADER include/spdk/gpt_spec.h 00:03:42.033 TEST_HEADER include/spdk/hexlify.h 00:03:42.033 CC test/thread/poller_perf/poller_perf.o 00:03:42.033 TEST_HEADER include/spdk/histogram_data.h 00:03:42.033 TEST_HEADER include/spdk/idxd.h 00:03:42.033 TEST_HEADER include/spdk/idxd_spec.h 00:03:42.033 TEST_HEADER include/spdk/init.h 00:03:42.033 CC examples/util/zipf/zipf.o 00:03:42.033 TEST_HEADER include/spdk/ioat.h 00:03:42.033 TEST_HEADER include/spdk/ioat_spec.h 00:03:42.033 TEST_HEADER include/spdk/iscsi_spec.h 00:03:42.033 TEST_HEADER include/spdk/json.h 00:03:42.033 TEST_HEADER include/spdk/jsonrpc.h 00:03:42.033 CC test/dma/test_dma/test_dma.o 00:03:42.033 TEST_HEADER include/spdk/keyring.h 00:03:42.033 TEST_HEADER include/spdk/keyring_module.h 00:03:42.033 TEST_HEADER include/spdk/likely.h 00:03:42.033 TEST_HEADER include/spdk/log.h 00:03:42.033 TEST_HEADER include/spdk/lvol.h 00:03:42.033 TEST_HEADER include/spdk/md5.h 00:03:42.033 TEST_HEADER include/spdk/memory.h 00:03:42.033 CC test/app/bdev_svc/bdev_svc.o 00:03:42.033 TEST_HEADER include/spdk/mmio.h 00:03:42.033 TEST_HEADER include/spdk/nbd.h 00:03:42.033 TEST_HEADER include/spdk/net.h 00:03:42.033 TEST_HEADER include/spdk/notify.h 00:03:42.033 TEST_HEADER include/spdk/nvme.h 00:03:42.033 TEST_HEADER include/spdk/nvme_intel.h 00:03:42.033 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:42.033 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:42.033 TEST_HEADER include/spdk/nvme_spec.h 00:03:42.033 TEST_HEADER include/spdk/nvme_zns.h 00:03:42.033 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:42.033 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:42.033 LINK rpc_client_test 00:03:42.033 TEST_HEADER include/spdk/nvmf.h 00:03:42.033 CC test/env/mem_callbacks/mem_callbacks.o 00:03:42.033 TEST_HEADER include/spdk/nvmf_spec.h 00:03:42.033 TEST_HEADER include/spdk/nvmf_transport.h 00:03:42.033 TEST_HEADER include/spdk/opal.h 00:03:42.033 TEST_HEADER include/spdk/opal_spec.h 00:03:42.033 TEST_HEADER include/spdk/pci_ids.h 00:03:42.033 TEST_HEADER include/spdk/pipe.h 00:03:42.033 TEST_HEADER include/spdk/queue.h 00:03:42.033 TEST_HEADER include/spdk/reduce.h 00:03:42.033 TEST_HEADER include/spdk/rpc.h 00:03:42.033 TEST_HEADER include/spdk/scheduler.h 00:03:42.033 TEST_HEADER include/spdk/scsi.h 00:03:42.033 TEST_HEADER 
include/spdk/scsi_spec.h 00:03:42.033 TEST_HEADER include/spdk/sock.h 00:03:42.033 TEST_HEADER include/spdk/stdinc.h 00:03:42.033 TEST_HEADER include/spdk/string.h 00:03:42.033 TEST_HEADER include/spdk/thread.h 00:03:42.033 TEST_HEADER include/spdk/trace.h 00:03:42.033 TEST_HEADER include/spdk/trace_parser.h 00:03:42.033 TEST_HEADER include/spdk/tree.h 00:03:42.033 TEST_HEADER include/spdk/ublk.h 00:03:42.033 TEST_HEADER include/spdk/util.h 00:03:42.033 TEST_HEADER include/spdk/uuid.h 00:03:42.033 TEST_HEADER include/spdk/version.h 00:03:42.297 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:42.297 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:42.297 TEST_HEADER include/spdk/vhost.h 00:03:42.297 TEST_HEADER include/spdk/vmd.h 00:03:42.297 TEST_HEADER include/spdk/xor.h 00:03:42.297 TEST_HEADER include/spdk/zipf.h 00:03:42.297 CXX test/cpp_headers/accel.o 00:03:42.297 LINK poller_perf 00:03:42.297 LINK nvmf_tgt 00:03:42.297 LINK spdk_trace_record 00:03:42.297 LINK zipf 00:03:42.297 CXX test/cpp_headers/accel_module.o 00:03:42.297 LINK bdev_svc 00:03:42.297 CXX test/cpp_headers/assert.o 00:03:42.297 LINK spdk_trace 00:03:42.297 CXX test/cpp_headers/barrier.o 00:03:42.558 CXX test/cpp_headers/base64.o 00:03:42.558 CC app/iscsi_tgt/iscsi_tgt.o 00:03:42.558 LINK test_dma 00:03:42.558 CC test/event/event_perf/event_perf.o 00:03:42.558 CC examples/ioat/perf/perf.o 00:03:42.558 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:42.558 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.558 CC app/spdk_tgt/spdk_tgt.o 00:03:42.558 CXX test/cpp_headers/bdev.o 00:03:42.558 LINK mem_callbacks 00:03:42.558 LINK event_perf 00:03:42.558 CC test/app/histogram_perf/histogram_perf.o 00:03:42.558 LINK iscsi_tgt 00:03:42.818 LINK ioat_perf 00:03:42.818 CC examples/vmd/lsvmd/lsvmd.o 00:03:42.818 LINK spdk_tgt 00:03:42.818 CXX test/cpp_headers/bdev_module.o 00:03:42.818 CC test/event/reactor/reactor.o 00:03:42.818 LINK histogram_perf 00:03:42.818 CC test/env/vtophys/vtophys.o 00:03:42.818 LINK lsvmd 00:03:42.818 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.818 CC examples/ioat/verify/verify.o 00:03:42.818 LINK vtophys 00:03:42.818 LINK reactor 00:03:42.818 CXX test/cpp_headers/bdev_zone.o 00:03:42.818 LINK nvme_fuzz 00:03:43.079 CC app/spdk_lspci/spdk_lspci.o 00:03:43.079 LINK env_dpdk_post_init 00:03:43.079 CC examples/vmd/led/led.o 00:03:43.079 CC examples/idxd/perf/perf.o 00:03:43.079 LINK verify 00:03:43.079 CC test/event/reactor_perf/reactor_perf.o 00:03:43.079 CC test/event/app_repeat/app_repeat.o 00:03:43.080 LINK spdk_lspci 00:03:43.080 CXX test/cpp_headers/bit_array.o 00:03:43.080 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.080 LINK led 00:03:43.080 LINK app_repeat 00:03:43.341 CC test/env/memory/memory_ut.o 00:03:43.341 LINK reactor_perf 00:03:43.341 CXX test/cpp_headers/bit_pool.o 00:03:43.341 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.341 CC test/env/pci/pci_ut.o 00:03:43.341 CC app/spdk_nvme_perf/perf.o 00:03:43.341 CC app/spdk_nvme_identify/identify.o 00:03:43.341 LINK idxd_perf 00:03:43.341 CXX test/cpp_headers/blob_bdev.o 00:03:43.341 CC test/event/scheduler/scheduler.o 00:03:43.602 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.602 CXX test/cpp_headers/blobfs_bdev.o 00:03:43.602 LINK pci_ut 00:03:43.602 LINK interrupt_tgt 00:03:43.602 LINK scheduler 00:03:43.602 LINK vhost_fuzz 00:03:43.602 CXX test/cpp_headers/blobfs.o 00:03:43.861 CC examples/thread/thread/thread_ex.o 00:03:43.861 CXX test/cpp_headers/blob.o 00:03:43.861 CXX test/cpp_headers/conf.o 00:03:44.119 CC 
examples/sock/hello_world/hello_sock.o 00:03:44.119 CXX test/cpp_headers/config.o 00:03:44.119 CXX test/cpp_headers/cpuset.o 00:03:44.119 LINK thread 00:03:44.119 CC test/accel/dif/dif.o 00:03:44.119 CC test/blobfs/mkfs/mkfs.o 00:03:44.119 LINK spdk_nvme_identify 00:03:44.119 CXX test/cpp_headers/crc16.o 00:03:44.119 LINK spdk_nvme_perf 00:03:44.119 CC test/lvol/esnap/esnap.o 00:03:44.119 LINK mkfs 00:03:44.378 LINK hello_sock 00:03:44.378 CC test/app/jsoncat/jsoncat.o 00:03:44.378 LINK iscsi_fuzz 00:03:44.378 CXX test/cpp_headers/crc32.o 00:03:44.378 CXX test/cpp_headers/crc64.o 00:03:44.378 LINK memory_ut 00:03:44.378 LINK jsoncat 00:03:44.378 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.378 CC app/spdk_top/spdk_top.o 00:03:44.378 CXX test/cpp_headers/dif.o 00:03:44.637 CC test/app/stub/stub.o 00:03:44.637 CC examples/accel/perf/accel_perf.o 00:03:44.637 LINK dif 00:03:44.637 CC app/vhost/vhost.o 00:03:44.637 LINK spdk_nvme_discover 00:03:44.637 CC app/spdk_dd/spdk_dd.o 00:03:44.637 CXX test/cpp_headers/dma.o 00:03:44.637 CC app/fio/nvme/fio_plugin.o 00:03:44.637 LINK stub 00:03:44.637 LINK vhost 00:03:44.637 CXX test/cpp_headers/endian.o 00:03:44.637 CXX test/cpp_headers/env_dpdk.o 00:03:44.895 CC app/fio/bdev/fio_plugin.o 00:03:44.895 CXX test/cpp_headers/env.o 00:03:44.895 CC test/nvme/aer/aer.o 00:03:44.895 CC test/nvme/reset/reset.o 00:03:44.895 LINK spdk_dd 00:03:44.895 CXX test/cpp_headers/event.o 00:03:44.895 LINK accel_perf 00:03:45.153 CC examples/blob/hello_world/hello_blob.o 00:03:45.153 CXX test/cpp_headers/fd_group.o 00:03:45.153 CC examples/blob/cli/blobcli.o 00:03:45.153 LINK aer 00:03:45.153 LINK reset 00:03:45.153 LINK spdk_bdev 00:03:45.153 LINK spdk_nvme 00:03:45.153 CC test/nvme/sgl/sgl.o 00:03:45.153 LINK hello_blob 00:03:45.153 CXX test/cpp_headers/fd.o 00:03:45.412 CXX test/cpp_headers/file.o 00:03:45.412 CXX test/cpp_headers/fsdev.o 00:03:45.412 CXX test/cpp_headers/fsdev_module.o 00:03:45.412 LINK spdk_top 00:03:45.412 CXX test/cpp_headers/ftl.o 00:03:45.412 CC examples/nvme/hello_world/hello_world.o 00:03:45.412 LINK sgl 00:03:45.412 CC test/bdev/bdevio/bdevio.o 00:03:45.412 CC examples/nvme/reconnect/reconnect.o 00:03:45.412 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.670 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.670 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.670 CXX test/cpp_headers/fuse_dispatcher.o 00:03:45.670 LINK blobcli 00:03:45.670 LINK hello_world 00:03:45.670 CC test/nvme/e2edp/nvme_dp.o 00:03:45.670 CXX test/cpp_headers/gpt_spec.o 00:03:45.670 LINK reconnect 00:03:45.670 LINK bdevio 00:03:45.928 LINK hello_bdev 00:03:45.928 LINK hello_fsdev 00:03:45.928 CC test/nvme/overhead/overhead.o 00:03:45.928 CC test/nvme/err_injection/err_injection.o 00:03:45.928 CXX test/cpp_headers/hexlify.o 00:03:45.928 LINK nvme_dp 00:03:45.928 CXX test/cpp_headers/histogram_data.o 00:03:45.928 CXX test/cpp_headers/idxd.o 00:03:45.928 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.928 LINK err_injection 00:03:45.928 CC examples/nvme/arbitration/arbitration.o 00:03:45.928 CXX test/cpp_headers/idxd_spec.o 00:03:45.928 CXX test/cpp_headers/init.o 00:03:45.928 CC examples/nvme/hotplug/hotplug.o 00:03:46.187 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.187 CXX test/cpp_headers/ioat.o 00:03:46.187 LINK overhead 00:03:46.187 CXX test/cpp_headers/ioat_spec.o 00:03:46.187 LINK cmb_copy 00:03:46.187 CC examples/nvme/abort/abort.o 00:03:46.187 LINK hotplug 00:03:46.187 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.187 CC 
test/nvme/startup/startup.o 00:03:46.187 CXX test/cpp_headers/iscsi_spec.o 00:03:46.445 CXX test/cpp_headers/json.o 00:03:46.445 LINK arbitration 00:03:46.445 CXX test/cpp_headers/jsonrpc.o 00:03:46.445 CXX test/cpp_headers/keyring.o 00:03:46.445 LINK bdevperf 00:03:46.445 LINK pmr_persistence 00:03:46.445 CXX test/cpp_headers/keyring_module.o 00:03:46.445 LINK startup 00:03:46.445 LINK nvme_manage 00:03:46.445 CXX test/cpp_headers/likely.o 00:03:46.445 CXX test/cpp_headers/log.o 00:03:46.445 CXX test/cpp_headers/lvol.o 00:03:46.445 CXX test/cpp_headers/md5.o 00:03:46.704 CXX test/cpp_headers/memory.o 00:03:46.704 LINK abort 00:03:46.704 CXX test/cpp_headers/mmio.o 00:03:46.704 CXX test/cpp_headers/nbd.o 00:03:46.704 CXX test/cpp_headers/net.o 00:03:46.704 CXX test/cpp_headers/notify.o 00:03:46.704 CC test/nvme/reserve/reserve.o 00:03:46.704 CC test/nvme/simple_copy/simple_copy.o 00:03:46.704 CXX test/cpp_headers/nvme.o 00:03:46.704 CXX test/cpp_headers/nvme_intel.o 00:03:46.704 CC test/nvme/connect_stress/connect_stress.o 00:03:46.704 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.704 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.704 CC test/nvme/boot_partition/boot_partition.o 00:03:46.704 LINK reserve 00:03:46.962 CC test/nvme/compliance/nvme_compliance.o 00:03:46.962 LINK connect_stress 00:03:46.962 CC test/nvme/fused_ordering/fused_ordering.o 00:03:46.962 CC examples/nvmf/nvmf/nvmf.o 00:03:46.962 LINK simple_copy 00:03:46.962 CXX test/cpp_headers/nvme_spec.o 00:03:46.962 LINK boot_partition 00:03:46.962 CXX test/cpp_headers/nvme_zns.o 00:03:46.962 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.962 LINK fused_ordering 00:03:46.962 CC test/nvme/fdp/fdp.o 00:03:47.220 CXX test/cpp_headers/nvmf_cmd.o 00:03:47.220 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:47.220 LINK nvmf 00:03:47.220 CC test/nvme/cuse/cuse.o 00:03:47.220 CXX test/cpp_headers/nvmf.o 00:03:47.220 CXX test/cpp_headers/nvmf_spec.o 00:03:47.220 LINK doorbell_aers 00:03:47.220 LINK nvme_compliance 00:03:47.220 CXX test/cpp_headers/nvmf_transport.o 00:03:47.220 CXX test/cpp_headers/opal.o 00:03:47.220 CXX test/cpp_headers/opal_spec.o 00:03:47.220 CXX test/cpp_headers/pci_ids.o 00:03:47.220 CXX test/cpp_headers/pipe.o 00:03:47.220 CXX test/cpp_headers/queue.o 00:03:47.220 CXX test/cpp_headers/reduce.o 00:03:47.220 CXX test/cpp_headers/rpc.o 00:03:47.478 CXX test/cpp_headers/scheduler.o 00:03:47.478 LINK fdp 00:03:47.478 CXX test/cpp_headers/scsi.o 00:03:47.478 CXX test/cpp_headers/scsi_spec.o 00:03:47.478 CXX test/cpp_headers/sock.o 00:03:47.478 CXX test/cpp_headers/stdinc.o 00:03:47.478 CXX test/cpp_headers/string.o 00:03:47.478 CXX test/cpp_headers/thread.o 00:03:47.478 CXX test/cpp_headers/trace.o 00:03:47.478 CXX test/cpp_headers/trace_parser.o 00:03:47.478 CXX test/cpp_headers/tree.o 00:03:47.478 CXX test/cpp_headers/ublk.o 00:03:47.478 CXX test/cpp_headers/util.o 00:03:47.478 CXX test/cpp_headers/uuid.o 00:03:47.478 CXX test/cpp_headers/version.o 00:03:47.736 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.736 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.736 CXX test/cpp_headers/vhost.o 00:03:47.737 CXX test/cpp_headers/vmd.o 00:03:47.737 CXX test/cpp_headers/xor.o 00:03:47.737 CXX test/cpp_headers/zipf.o 00:03:47.997 LINK cuse 00:03:48.940 LINK esnap 00:03:49.202 00:03:49.202 real 1m7.105s 00:03:49.202 user 6m8.760s 00:03:49.202 sys 1m10.823s 00:03:49.202 19:01:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:49.202 ************************************ 00:03:49.202 END TEST make 00:03:49.202 
************************************ 00:03:49.202 19:01:58 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.202 19:01:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.202 19:01:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.202 19:01:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.202 19:01:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.202 19:01:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.202 19:01:58 -- pm/common@44 -- $ pid=5070 00:03:49.202 19:01:58 -- pm/common@50 -- $ kill -TERM 5070 00:03:49.202 19:01:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.202 19:01:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.202 19:01:58 -- pm/common@44 -- $ pid=5071 00:03:49.202 19:01:58 -- pm/common@50 -- $ kill -TERM 5071 00:03:49.202 19:01:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:49.202 19:01:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:49.202 19:01:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.202 19:01:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.202 19:01:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.202 19:01:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.202 19:01:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.202 19:01:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.202 19:01:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.202 19:01:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.202 19:01:58 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.202 19:01:58 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.202 19:01:58 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.202 19:01:58 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.202 19:01:58 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.202 19:01:58 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.202 19:01:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.202 19:01:58 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.202 19:01:58 -- scripts/common.sh@345 -- # : 1 00:03:49.202 19:01:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.202 19:01:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.202 19:01:58 -- scripts/common.sh@365 -- # decimal 1 00:03:49.202 19:01:58 -- scripts/common.sh@353 -- # local d=1 00:03:49.202 19:01:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.202 19:01:58 -- scripts/common.sh@355 -- # echo 1 00:03:49.202 19:01:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.202 19:01:58 -- scripts/common.sh@366 -- # decimal 2 00:03:49.202 19:01:58 -- scripts/common.sh@353 -- # local d=2 00:03:49.202 19:01:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.202 19:01:58 -- scripts/common.sh@355 -- # echo 2 00:03:49.202 19:01:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.202 19:01:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.202 19:01:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.202 19:01:58 -- scripts/common.sh@368 -- # return 0 00:03:49.202 19:01:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.202 19:01:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.202 --rc genhtml_branch_coverage=1 00:03:49.202 --rc genhtml_function_coverage=1 00:03:49.202 --rc genhtml_legend=1 00:03:49.202 --rc geninfo_all_blocks=1 00:03:49.202 --rc geninfo_unexecuted_blocks=1 00:03:49.202 00:03:49.202 ' 00:03:49.202 19:01:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.202 --rc genhtml_branch_coverage=1 00:03:49.202 --rc genhtml_function_coverage=1 00:03:49.202 --rc genhtml_legend=1 00:03:49.202 --rc geninfo_all_blocks=1 00:03:49.202 --rc geninfo_unexecuted_blocks=1 00:03:49.202 00:03:49.202 ' 00:03:49.202 19:01:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.202 --rc genhtml_branch_coverage=1 00:03:49.202 --rc genhtml_function_coverage=1 00:03:49.202 --rc genhtml_legend=1 00:03:49.202 --rc geninfo_all_blocks=1 00:03:49.202 --rc geninfo_unexecuted_blocks=1 00:03:49.202 00:03:49.202 ' 00:03:49.202 19:01:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.202 --rc genhtml_branch_coverage=1 00:03:49.202 --rc genhtml_function_coverage=1 00:03:49.202 --rc genhtml_legend=1 00:03:49.202 --rc geninfo_all_blocks=1 00:03:49.202 --rc geninfo_unexecuted_blocks=1 00:03:49.202 00:03:49.202 ' 00:03:49.202 19:01:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.202 19:01:58 -- nvmf/common.sh@7 -- # uname -s 00:03:49.202 19:01:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.202 19:01:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.202 19:01:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.202 19:01:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.202 19:01:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.202 19:01:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.202 19:01:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.202 19:01:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.202 19:01:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.202 19:01:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.202 19:01:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01172367-e710-474b-807e-39ce49b4e4e4 00:03:49.202 
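For reference, the version test traced above (lt 1.15 2, which calls cmp_versions) can be condensed into a standalone helper. This is a simplified sketch of the logic visible in the xtrace, not the full scripts/common.sh implementation (the real decimal() helper and operator handling cover more cases):

    # Sketch of the dotted-version comparison traced above (illustrative only).
    # Splits each version on '.', '-' and ':' and compares component-wise,
    # treating missing or non-numeric components as 0.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local c1=${ver1[v]:-0} c2=${ver2[v]:-0}
            [[ $c1 =~ ^[0-9]+$ ]] || c1=0   # mirrors the decimal() validation
            [[ $c2 =~ ^[0-9]+$ ]] || c2=0
            (( c1 > c2 )) && { [[ $op == '>' ]]; return; }
            (( c1 < c2 )) && { [[ $op == '<' ]]; return; }
        done
        # All components equal: only '=' succeeds.
        [[ $op == '=' ]]
    }

    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # prints the message

This is why the run above selects the legacy --rc lcov_branch_coverage options: the installed lcov reports 1.15, which compares less than 2.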
19:01:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=01172367-e710-474b-807e-39ce49b4e4e4 00:03:49.202 19:01:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.202 19:01:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.202 19:01:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.202 19:01:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.202 19:01:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.202 19:01:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.202 19:01:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.202 19:01:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.202 19:01:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.202 19:01:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.202 19:01:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.202 19:01:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.202 19:01:58 -- paths/export.sh@5 -- # export PATH 00:03:49.202 19:01:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.202 19:01:58 -- nvmf/common.sh@51 -- # : 0 00:03:49.202 19:01:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.202 19:01:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.202 19:01:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.202 19:01:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.202 19:01:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.202 19:01:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.202 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.202 19:01:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.202 19:01:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.202 19:01:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.202 19:01:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.202 19:01:58 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.202 19:01:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.202 19:01:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.202 19:01:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.463 19:01:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.463 19:01:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.463 19:01:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.463 19:01:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.463 19:01:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.463 19:01:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54261 00:03:49.463 19:01:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.463 19:01:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.463 19:01:58 -- pm/common@17 -- # local monitor 00:03:49.463 19:01:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.463 19:01:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.463 19:01:58 -- pm/common@25 -- # sleep 1 00:03:49.463 19:01:58 -- pm/common@21 -- # date +%s 00:03:49.463 19:01:58 -- pm/common@21 -- # date +%s 00:03:49.463 19:01:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732734118 00:03:49.463 19:01:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732734118 00:03:49.463 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732734118_collect-cpu-load.pm.log 00:03:49.463 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732734118_collect-vmstat.pm.log 00:03:50.413 19:01:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.413 19:01:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.413 19:01:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.413 19:01:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.413 19:01:59 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.413 19:01:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:50.413 19:01:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.413 19:01:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.413 19:01:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.413 19:01:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.413 19:01:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.413 19:01:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.413 19:01:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.413 19:01:59 -- common/autotest_common.sh@1457 -- # uname 00:03:50.414 19:01:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:50.414 19:01:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.414 19:01:59 -- common/autotest_common.sh@1477 -- # uname 00:03:50.414 19:01:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:50.414 19:01:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.414 19:01:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.414 lcov: LCOV version 1.15 00:03:50.414 19:02:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:08.536 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.444 19:02:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:23.444 19:02:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.444 19:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:23.444 19:02:30 -- spdk/autotest.sh@78 -- # rm -f 00:04:23.444 19:02:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.444 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:23.444 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:23.444 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:23.444 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:23.444 19:02:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:23.444 19:02:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:23.444 19:02:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:23.444 19:02:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:23.444 19:02:31 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.444 19:02:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:23.444 19:02:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:23.444 19:02:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.444 19:02:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:23.444 19:02:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.444 19:02:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.444 19:02:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:23.444 19:02:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:23.444 19:02:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.444 No valid GPT data, bailing 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.444 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.444 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.444 1+0 records in 00:04:23.444 1+0 records out 00:04:23.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288334 s, 36.4 MB/s 00:04:23.444 19:02:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.444 19:02:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.444 19:02:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:23.444 19:02:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:23.444 19:02:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:23.444 No valid GPT data, bailing 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.444 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.444 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:23.444 1+0 records in 00:04:23.444 1+0 records out 00:04:23.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536506 s, 195 MB/s 00:04:23.444 19:02:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.444 19:02:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.444 19:02:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:23.444 19:02:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:23.444 19:02:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:23.444 No valid GPT data, bailing 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.444 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.444 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:23.444 1+0 
records in 00:04:23.444 1+0 records out 00:04:23.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562742 s, 186 MB/s 00:04:23.444 19:02:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.444 19:02:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.444 19:02:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:23.444 19:02:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:23.444 19:02:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:23.444 No valid GPT data, bailing 00:04:23.444 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:23.445 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.445 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.445 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:23.445 1+0 records in 00:04:23.445 1+0 records out 00:04:23.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495702 s, 212 MB/s 00:04:23.445 19:02:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.445 19:02:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.445 19:02:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:23.445 19:02:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:23.445 19:02:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:23.445 No valid GPT data, bailing 00:04:23.445 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:23.445 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.445 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.445 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:23.445 1+0 records in 00:04:23.445 1+0 records out 00:04:23.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00338854 s, 309 MB/s 00:04:23.445 19:02:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.445 19:02:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.445 19:02:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:23.445 19:02:32 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:23.445 19:02:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:23.445 No valid GPT data, bailing 00:04:23.445 19:02:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:23.445 19:02:32 -- scripts/common.sh@394 -- # pt= 00:04:23.445 19:02:32 -- scripts/common.sh@395 -- # return 1 00:04:23.445 19:02:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:23.445 1+0 records in 00:04:23.445 1+0 records out 00:04:23.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482514 s, 217 MB/s 00:04:23.445 19:02:32 -- spdk/autotest.sh@105 -- # sync 00:04:23.445 19:02:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.445 19:02:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.445 19:02:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.829 19:02:34 -- spdk/autotest.sh@111 -- # uname -s 00:04:24.829 19:02:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:24.829 19:02:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:24.829 19:02:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.661 
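The loop traced above first filters out zoned namespaces (is_block_zoned reads queue/zoned from sysfs), then zeroes the first MiB of every whole namespace for which blkid finds no partition-table signature (an empty PTTYPE is what makes block_in_use return 1). A condensed, standalone sketch of that pre-clean pass under the same sysfs layout as the trace; error handling is trimmed and the repo's spdk-gpt.py probe is omitted:

    #!/usr/bin/env bash
    # Condensed sketch of the traced disk pre-clean (assumes the same sysfs layout).
    shopt -s extglob          # needed for the /dev/nvme*n!(*p*) whole-namespace glob
    declare -A zoned

    # Pass 1: remember zoned namespaces; queue/zoned reads back e.g. "host-managed".
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && zoned[$dev]=1
    done

    # Pass 2: wipe the first MiB of each non-zoned namespace that carries no
    # partition-table signature, so stale metadata cannot confuse later tests.
    for dev in /dev/nvme*n!(*p*); do
        [[ ${zoned[${dev##*/}]:-} ]] && continue
        pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

In this run none of the six namespaces is zoned and none has a partition table, so all six get the 1 MiB wipe, as the dd output shows.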
Hugepages
00:04:25.661 node hugesize free / total
00:04:25.661 node0 1048576kB 0 / 0
00:04:25.661 node0 2048kB 0 / 0
00:04:25.661
00:04:25.661 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:25.661 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:25.661 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:25.661 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:25.661 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:25.921 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:25.921 19:02:35 -- spdk/autotest.sh@117 -- # uname -s
00:04:25.921 19:02:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:25.921 19:02:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:25.921 19:02:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:26.754 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.754 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.754 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.754 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.754 19:02:36 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:28.140 19:02:37 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:28.140 19:02:37 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:28.140 19:02:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:28.140 19:02:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:28.140 19:02:37 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:28.140 19:02:37 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:28.140 19:02:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:28.140 19:02:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:28.140 19:02:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:28.140 19:02:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:04:28.140 19:02:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:28.140 19:02:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:28.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:28.401 Waiting for block devices as requested
00:04:28.401 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:28.401 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:28.661 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:28.661 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:33.953 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:33.953 19:02:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:33.953 19:02:43 -- common/autotest_common.sh@1488 -- #

[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:33.953 19:02:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:33.953 19:02:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1543 -- # continue 00:04:33.953 19:02:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:33.953 19:02:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1543 -- # continue 00:04:33.953 19:02:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:33.953 19:02:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1543 -- # continue 00:04:33.953 19:02:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:33.953 19:02:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:33.953 19:02:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:33.953 19:02:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.953 19:02:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.954 19:02:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:33.954 19:02:43 -- common/autotest_common.sh@1543 -- # continue 00:04:33.954 19:02:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:33.954 19:02:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.954 19:02:43 -- common/autotest_common.sh@10 -- # set +x 00:04:33.954 19:02:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:33.954 19:02:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.954 19:02:43 -- common/autotest_common.sh@10 -- # set +x 00:04:33.954 19:02:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.098 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.098 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.098 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.098 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.098 19:02:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:35.098 19:02:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.098 19:02:44 -- common/autotest_common.sh@10 -- # set +x 00:04:35.098 19:02:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:35.098 19:02:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:35.098 19:02:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.098 19:02:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:35.098 19:02:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:35.098 19:02:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:35.098 19:02:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:35.098 19:02:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:35.098 19:02:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.098 19:02:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.098 19:02:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.098 19:02:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:35.098 19:02:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.359 19:02:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:35.359 19:02:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:35.359 19:02:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.359 19:02:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.359 19:02:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.359 19:02:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.359 19:02:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.359 19:02:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
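The per-controller probe traced above maps each PCI address back to its character device, then uses nvme-cli to decide whether a namespace-layout revert is needed; the surrounding loop applies the same pattern to filter for controllers whose PCI device ID is 0x0a54 before an Opal revert. A sketch of the traced capability check, under the assumptions visible in the trace (bdfs comes from gen_nvme.sh; nvme-cli and jq are installed):

    # Sketch of the traced pre-cleanup probe (requires nvme-cli and jq).
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Resolve the PCI address to its controller node (e.g. /dev/nvme1) by
        # following the /sys/class/nvme symlinks back to their PCI parents.
        sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        ctrlr=/dev/$(basename "$sysfs")

        # OACS bit 3 (0x8) advertises namespace management; the trace reads
        # oacs=0x12a, so 0x12a & 0x8 = 8 and the check proceeds.
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || continue

        # unvmcap == 0 means every byte of capacity is already allocated to
        # namespaces, hence the "continue" for all four controllers above.
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue

        echo "$ctrlr ($bdf): $unvmcap bytes unallocated, namespace revert needed"
    done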
00:04:35.359 19:02:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:35.359 19:02:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:35.359 19:02:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.359 19:02:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:35.359 19:02:44 -- common/autotest_common.sh@1572 -- # return 0 00:04:35.359 19:02:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:35.359 19:02:44 -- common/autotest_common.sh@1580 -- # return 0 00:04:35.359 19:02:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:35.359 19:02:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:35.359 19:02:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.359 19:02:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.359 19:02:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:35.359 19:02:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.359 19:02:44 -- common/autotest_common.sh@10 -- # set +x 00:04:35.359 19:02:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:35.360 19:02:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.360 19:02:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.360 19:02:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.360 19:02:44 -- common/autotest_common.sh@10 -- # set +x 00:04:35.360 ************************************ 00:04:35.360 START TEST env 00:04:35.360 ************************************ 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.360 * Looking for test storage... 00:04:35.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.360 19:02:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.360 19:02:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.360 19:02:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.360 19:02:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.360 19:02:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.360 19:02:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.360 19:02:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.360 19:02:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.360 19:02:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.360 19:02:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.360 19:02:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.360 19:02:44 env -- scripts/common.sh@344 -- # case "$op" in 00:04:35.360 19:02:44 env -- scripts/common.sh@345 -- # : 1 00:04:35.360 19:02:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.360 19:02:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.360 19:02:44 env -- scripts/common.sh@365 -- # decimal 1 00:04:35.360 19:02:44 env -- scripts/common.sh@353 -- # local d=1 00:04:35.360 19:02:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.360 19:02:44 env -- scripts/common.sh@355 -- # echo 1 00:04:35.360 19:02:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.360 19:02:44 env -- scripts/common.sh@366 -- # decimal 2 00:04:35.360 19:02:44 env -- scripts/common.sh@353 -- # local d=2 00:04:35.360 19:02:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.360 19:02:44 env -- scripts/common.sh@355 -- # echo 2 00:04:35.360 19:02:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.360 19:02:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.360 19:02:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.360 19:02:44 env -- scripts/common.sh@368 -- # return 0 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.360 --rc genhtml_branch_coverage=1 00:04:35.360 --rc genhtml_function_coverage=1 00:04:35.360 --rc genhtml_legend=1 00:04:35.360 --rc geninfo_all_blocks=1 00:04:35.360 --rc geninfo_unexecuted_blocks=1 00:04:35.360 00:04:35.360 ' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.360 --rc genhtml_branch_coverage=1 00:04:35.360 --rc genhtml_function_coverage=1 00:04:35.360 --rc genhtml_legend=1 00:04:35.360 --rc geninfo_all_blocks=1 00:04:35.360 --rc geninfo_unexecuted_blocks=1 00:04:35.360 00:04:35.360 ' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.360 --rc genhtml_branch_coverage=1 00:04:35.360 --rc genhtml_function_coverage=1 00:04:35.360 --rc genhtml_legend=1 00:04:35.360 --rc geninfo_all_blocks=1 00:04:35.360 --rc geninfo_unexecuted_blocks=1 00:04:35.360 00:04:35.360 ' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.360 --rc genhtml_branch_coverage=1 00:04:35.360 --rc genhtml_function_coverage=1 00:04:35.360 --rc genhtml_legend=1 00:04:35.360 --rc geninfo_all_blocks=1 00:04:35.360 --rc geninfo_unexecuted_blocks=1 00:04:35.360 00:04:35.360 ' 00:04:35.360 19:02:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.360 19:02:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.360 19:02:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.360 ************************************ 00:04:35.360 START TEST env_memory 00:04:35.360 ************************************ 00:04:35.360 19:02:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:35.621 00:04:35.621 00:04:35.621 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.621 http://cunit.sourceforge.net/ 00:04:35.621 00:04:35.621 00:04:35.621 Suite: memory 00:04:35.621 Test: alloc and free memory map ...[2024-11-27 19:02:45.047943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.621 passed 00:04:35.621 Test: mem map translation ...[2024-11-27 19:02:45.086882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.621 [2024-11-27 19:02:45.086933] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.621 [2024-11-27 19:02:45.086991] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.621 [2024-11-27 19:02:45.087005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.621 passed 00:04:35.621 Test: mem map registration ...[2024-11-27 19:02:45.155292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:35.621 [2024-11-27 19:02:45.155344] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:35.621 passed 00:04:35.621 Test: mem map adjacent registrations ...passed 00:04:35.621 00:04:35.621 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.621 suites 1 1 n/a 0 0 00:04:35.621 tests 4 4 4 0 0 00:04:35.621 asserts 152 152 152 0 n/a 00:04:35.621 00:04:35.621 Elapsed time = 0.233 seconds 00:04:35.882 00:04:35.882 real 0m0.272s 00:04:35.882 user 0m0.238s 00:04:35.882 sys 0m0.024s 00:04:35.882 19:02:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.882 ************************************ 00:04:35.882 END TEST env_memory 00:04:35.882 ************************************ 00:04:35.882 19:02:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.882 19:02:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.883 19:02:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.883 19:02:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.883 19:02:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.883 ************************************ 00:04:35.883 START TEST env_vtophys 00:04:35.883 ************************************ 00:04:35.883 19:02:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.883 EAL: lib.eal log level changed from notice to debug 00:04:35.883 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 1 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 2 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 3 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 4 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 5 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 6 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 7 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 8 as core 0 on socket 0 00:04:35.883 EAL: Detected lcore 9 as core 0 on socket 0 00:04:35.883 EAL: Maximum logical cores by configuration: 128 00:04:35.883 EAL: Detected CPU lcores: 10 00:04:35.883 EAL: Detected NUMA nodes: 1 00:04:35.883 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.883 EAL: Detected shared linkage of DPDK 00:04:35.883 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:35.883 EAL: Selected IOVA mode 'PA' 00:04:35.883 EAL: Probing VFIO support... 00:04:35.883 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.883 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:35.883 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.883 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.883 EAL: Setting up physically contiguous memory... 00:04:35.883 EAL: Setting maximum number of open files to 524288 00:04:35.883 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.883 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.883 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.883 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.883 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.883 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.883 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.883 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.883 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.883 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.883 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.883 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.883 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.883 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.883 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.883 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.883 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.883 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.883 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.883 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.883 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.883 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.883 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.883 EAL: Hugepages will be freed exactly as allocated. 00:04:35.883 EAL: No shared files mode enabled, IPC is disabled 00:04:35.883 EAL: No shared files mode enabled, IPC is disabled 00:04:35.883 EAL: TSC frequency is ~2600000 KHz 00:04:35.883 EAL: Main lcore 0 is ready (tid=7fb089ecba40;cpuset=[0]) 00:04:35.883 EAL: Trying to obtain current memory policy. 00:04:35.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.883 EAL: Restoring previous memory policy: 0 00:04:35.883 EAL: request: mp_malloc_sync 00:04:35.883 EAL: No shared files mode enabled, IPC is disabled 00:04:35.883 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.883 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.883 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.883 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.883 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:36.144 00:04:36.144 00:04:36.144 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.144 http://cunit.sourceforge.net/ 00:04:36.144 00:04:36.144 00:04:36.145 Suite: components_suite 00:04:36.405 Test: vtophys_malloc_test ...passed 00:04:36.405 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.405 EAL: Restoring previous memory policy: 4 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.405 EAL: Trying to obtain current memory policy. 00:04:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.405 EAL: Restoring previous memory policy: 4 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.405 EAL: Trying to obtain current memory policy. 00:04:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.405 EAL: Restoring previous memory policy: 4 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.405 EAL: Trying to obtain current memory policy. 00:04:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.405 EAL: Restoring previous memory policy: 4 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.405 EAL: request: mp_malloc_sync 00:04:36.405 EAL: No shared files mode enabled, IPC is disabled 00:04:36.405 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.405 EAL: Trying to obtain current memory policy. 00:04:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.668 EAL: Restoring previous memory policy: 4 00:04:36.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.668 EAL: request: mp_malloc_sync 00:04:36.668 EAL: No shared files mode enabled, IPC is disabled 00:04:36.668 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.668 EAL: request: mp_malloc_sync 00:04:36.668 EAL: No shared files mode enabled, IPC is disabled 00:04:36.668 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.668 EAL: Trying to obtain current memory policy. 
00:04:36.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.668 EAL: Restoring previous memory policy: 4 00:04:36.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.668 EAL: request: mp_malloc_sync 00:04:36.668 EAL: No shared files mode enabled, IPC is disabled 00:04:36.668 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.668 EAL: request: mp_malloc_sync 00:04:36.668 EAL: No shared files mode enabled, IPC is disabled 00:04:36.668 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.668 EAL: Trying to obtain current memory policy. 00:04:36.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.930 EAL: Restoring previous memory policy: 4 00:04:36.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.930 EAL: request: mp_malloc_sync 00:04:36.930 EAL: No shared files mode enabled, IPC is disabled 00:04:36.930 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.930 EAL: request: mp_malloc_sync 00:04:36.930 EAL: No shared files mode enabled, IPC is disabled 00:04:36.930 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.191 EAL: Trying to obtain current memory policy. 00:04:37.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.191 EAL: Restoring previous memory policy: 4 00:04:37.191 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.191 EAL: request: mp_malloc_sync 00:04:37.191 EAL: No shared files mode enabled, IPC is disabled 00:04:37.191 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.714 EAL: request: mp_malloc_sync 00:04:37.714 EAL: No shared files mode enabled, IPC is disabled 00:04:37.714 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.976 EAL: Trying to obtain current memory policy. 00:04:37.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.976 EAL: Restoring previous memory policy: 4 00:04:37.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.976 EAL: request: mp_malloc_sync 00:04:37.976 EAL: No shared files mode enabled, IPC is disabled 00:04:37.976 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.917 EAL: request: mp_malloc_sync 00:04:38.917 EAL: No shared files mode enabled, IPC is disabled 00:04:38.917 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.176 EAL: Trying to obtain current memory policy. 
00:04:39.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.434 EAL: Restoring previous memory policy: 4 00:04:39.434 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.434 EAL: request: mp_malloc_sync 00:04:39.434 EAL: No shared files mode enabled, IPC is disabled 00:04:39.434 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.369 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.369 EAL: request: mp_malloc_sync 00:04:40.369 EAL: No shared files mode enabled, IPC is disabled 00:04:40.369 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.337 passed 00:04:41.337 00:04:41.337 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.337 suites 1 1 n/a 0 0 00:04:41.337 tests 2 2 2 0 0 00:04:41.337 asserts 5726 5726 5726 0 n/a 00:04:41.337 00:04:41.337 Elapsed time = 5.184 seconds 00:04:41.337 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.337 EAL: request: mp_malloc_sync 00:04:41.337 EAL: No shared files mode enabled, IPC is disabled 00:04:41.337 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.337 EAL: No shared files mode enabled, IPC is disabled 00:04:41.337 EAL: No shared files mode enabled, IPC is disabled 00:04:41.337 EAL: No shared files mode enabled, IPC is disabled 00:04:41.337 00:04:41.337 real 0m5.463s 00:04:41.338 user 0m4.328s 00:04:41.338 sys 0m0.978s 00:04:41.338 19:02:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.338 19:02:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.338 ************************************ 00:04:41.338 END TEST env_vtophys 00:04:41.338 ************************************ 00:04:41.338 19:02:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.338 19:02:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.338 19:02:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.338 19:02:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.338 ************************************ 00:04:41.338 START TEST env_pci 00:04:41.338 ************************************ 00:04:41.338 19:02:50 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.338 00:04:41.338 00:04:41.338 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.338 http://cunit.sourceforge.net/ 00:04:41.338 00:04:41.338 00:04:41.338 Suite: pci 00:04:41.338 Test: pci_hook ...[2024-11-27 19:02:50.867861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57044 has claimed it 00:04:41.338 passed 00:04:41.338 00:04:41.338 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.338 suites 1 1 n/a 0 0 00:04:41.338 tests 1 1 1 0 0 00:04:41.338 asserts 25 25 25 0 n/a 00:04:41.338 00:04:41.338 Elapsed time = 0.004 seconds 00:04:41.338 EAL: Cannot find device (10000:00:01.0) 00:04:41.338 EAL: Failed to attach device on primary process 00:04:41.338 00:04:41.338 real 0m0.062s 00:04:41.338 user 0m0.026s 00:04:41.338 sys 0m0.036s 00:04:41.338 19:02:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.338 19:02:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.338 ************************************ 00:04:41.338 END TEST env_pci 00:04:41.338 ************************************ 00:04:41.338 19:02:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.338 19:02:50 env -- env/env.sh@15 -- # uname 00:04:41.338 19:02:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.338 19:02:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.338 19:02:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.338 19:02:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:41.338 19:02:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.338 19:02:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.338 ************************************ 00:04:41.338 START TEST env_dpdk_post_init 00:04:41.338 ************************************ 00:04:41.338 19:02:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.600 EAL: Detected CPU lcores: 10 00:04:41.600 EAL: Detected NUMA nodes: 1 00:04:41.600 EAL: Detected shared linkage of DPDK 00:04:41.600 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.600 EAL: Selected IOVA mode 'PA' 00:04:41.600 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:41.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:41.600 Starting DPDK initialization... 00:04:41.600 Starting SPDK post initialization... 00:04:41.600 SPDK NVMe probe 00:04:41.600 Attaching to 0000:00:10.0 00:04:41.600 Attaching to 0000:00:11.0 00:04:41.600 Attaching to 0000:00:12.0 00:04:41.600 Attaching to 0000:00:13.0 00:04:41.600 Attached to 0000:00:10.0 00:04:41.600 Attached to 0000:00:11.0 00:04:41.600 Attached to 0000:00:13.0 00:04:41.600 Attached to 0000:00:12.0 00:04:41.600 Cleaning up... 
00:04:41.600 00:04:41.600 real 0m0.232s 00:04:41.600 user 0m0.070s 00:04:41.600 sys 0m0.065s 00:04:41.600 19:02:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.600 19:02:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.600 ************************************ 00:04:41.600 END TEST env_dpdk_post_init 00:04:41.600 ************************************ 00:04:41.600 19:02:51 env -- env/env.sh@26 -- # uname 00:04:41.600 19:02:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.600 19:02:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.600 19:02:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.600 19:02:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.600 19:02:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.600 ************************************ 00:04:41.600 START TEST env_mem_callbacks 00:04:41.600 ************************************ 00:04:41.600 19:02:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.860 EAL: Detected CPU lcores: 10 00:04:41.860 EAL: Detected NUMA nodes: 1 00:04:41.860 EAL: Detected shared linkage of DPDK 00:04:41.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.860 EAL: Selected IOVA mode 'PA' 00:04:41.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.860 00:04:41.860 00:04:41.860 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.860 http://cunit.sourceforge.net/ 00:04:41.860 00:04:41.860 00:04:41.860 Suite: memory 00:04:41.860 Test: test ... 00:04:41.860 register 0x200000200000 2097152 00:04:41.860 malloc 3145728 00:04:41.860 register 0x200000400000 4194304 00:04:41.860 buf 0x2000004fffc0 len 3145728 PASSED 00:04:41.860 malloc 64 00:04:41.860 buf 0x2000004ffec0 len 64 PASSED 00:04:41.860 malloc 4194304 00:04:41.860 register 0x200000800000 6291456 00:04:41.860 buf 0x2000009fffc0 len 4194304 PASSED 00:04:41.860 free 0x2000004fffc0 3145728 00:04:41.860 free 0x2000004ffec0 64 00:04:41.860 unregister 0x200000400000 4194304 PASSED 00:04:41.860 free 0x2000009fffc0 4194304 00:04:41.860 unregister 0x200000800000 6291456 PASSED 00:04:41.860 malloc 8388608 00:04:41.860 register 0x200000400000 10485760 00:04:41.860 buf 0x2000005fffc0 len 8388608 PASSED 00:04:41.860 free 0x2000005fffc0 8388608 00:04:41.860 unregister 0x200000400000 10485760 PASSED 00:04:41.860 passed 00:04:41.860 00:04:41.860 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.860 suites 1 1 n/a 0 0 00:04:41.860 tests 1 1 1 0 0 00:04:41.860 asserts 15 15 15 0 n/a 00:04:41.860 00:04:41.860 Elapsed time = 0.037 seconds 00:04:41.860 00:04:41.860 real 0m0.207s 00:04:41.860 user 0m0.055s 00:04:41.860 sys 0m0.049s 00:04:41.860 19:02:51 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.860 19:02:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.860 ************************************ 00:04:41.860 END TEST env_mem_callbacks 00:04:41.860 ************************************ 00:04:41.860 00:04:41.860 real 0m6.666s 00:04:41.860 user 0m4.853s 00:04:41.860 sys 0m1.407s 00:04:41.860 19:02:51 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.861 19:02:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.861 ************************************ 00:04:41.861 END TEST env 00:04:41.861 
************************************ 00:04:42.123 19:02:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.123 19:02:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.123 19:02:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.123 19:02:51 -- common/autotest_common.sh@10 -- # set +x 00:04:42.123 ************************************ 00:04:42.123 START TEST rpc 00:04:42.123 ************************************ 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.123 * Looking for test storage... 00:04:42.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.123 19:02:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.123 19:02:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.123 19:02:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.123 19:02:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.123 19:02:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.123 19:02:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.123 19:02:51 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.123 19:02:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.123 19:02:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.123 19:02:51 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.123 19:02:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.123 19:02:51 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.123 19:02:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.123 19:02:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.123 19:02:51 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.123 --rc genhtml_branch_coverage=1 00:04:42.123 --rc genhtml_function_coverage=1 00:04:42.123 --rc genhtml_legend=1 00:04:42.123 --rc geninfo_all_blocks=1 00:04:42.123 --rc geninfo_unexecuted_blocks=1 00:04:42.123 00:04:42.123 ' 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.123 --rc genhtml_branch_coverage=1 00:04:42.123 --rc genhtml_function_coverage=1 00:04:42.123 --rc genhtml_legend=1 00:04:42.123 --rc geninfo_all_blocks=1 00:04:42.123 --rc geninfo_unexecuted_blocks=1 00:04:42.123 00:04:42.123 ' 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.123 --rc genhtml_branch_coverage=1 00:04:42.123 --rc genhtml_function_coverage=1 00:04:42.123 --rc genhtml_legend=1 00:04:42.123 --rc geninfo_all_blocks=1 00:04:42.123 --rc geninfo_unexecuted_blocks=1 00:04:42.123 00:04:42.123 ' 00:04:42.123 19:02:51 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.123 --rc genhtml_branch_coverage=1 00:04:42.123 --rc genhtml_function_coverage=1 00:04:42.123 --rc genhtml_legend=1 00:04:42.123 --rc geninfo_all_blocks=1 00:04:42.123 --rc geninfo_unexecuted_blocks=1 00:04:42.123 00:04:42.123 ' 00:04:42.123 19:02:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57171 00:04:42.123 19:02:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.124 19:02:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57171 00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@835 -- # '[' -z 57171 ']' 00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.124 19:02:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.124 19:02:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.124 [2024-11-27 19:02:51.726480] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:42.124 [2024-11-27 19:02:51.726603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57171 ] 00:04:42.383 [2024-11-27 19:02:51.882451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.383 [2024-11-27 19:02:51.973272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.383 [2024-11-27 19:02:51.973323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57171' to capture a snapshot of events at runtime. 00:04:42.383 [2024-11-27 19:02:51.973332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.383 [2024-11-27 19:02:51.973341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.383 [2024-11-27 19:02:51.973348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57171 for offline analysis/debug. 00:04:42.383 [2024-11-27 19:02:51.974051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.317 19:02:52 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.317 19:02:52 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.317 19:02:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.317 19:02:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.317 19:02:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.318 19:02:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.318 19:02:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.318 19:02:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.318 19:02:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 ************************************ 00:04:43.318 START TEST rpc_integrity 00:04:43.318 ************************************ 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.318 { 00:04:43.318 "name": "Malloc0", 00:04:43.318 "aliases": [ 00:04:43.318 "87d62d25-8217-47d5-b6ee-0c962da6388f" 00:04:43.318 ], 00:04:43.318 "product_name": "Malloc disk", 00:04:43.318 "block_size": 512, 00:04:43.318 "num_blocks": 16384, 00:04:43.318 "uuid": "87d62d25-8217-47d5-b6ee-0c962da6388f", 00:04:43.318 "assigned_rate_limits": { 00:04:43.318 "rw_ios_per_sec": 0, 00:04:43.318 "rw_mbytes_per_sec": 0, 00:04:43.318 "r_mbytes_per_sec": 0, 00:04:43.318 "w_mbytes_per_sec": 0 00:04:43.318 }, 00:04:43.318 "claimed": false, 00:04:43.318 "zoned": false, 00:04:43.318 "supported_io_types": { 00:04:43.318 "read": true, 00:04:43.318 "write": true, 00:04:43.318 "unmap": true, 00:04:43.318 "flush": true, 00:04:43.318 "reset": true, 00:04:43.318 "nvme_admin": false, 00:04:43.318 "nvme_io": false, 00:04:43.318 "nvme_io_md": false, 00:04:43.318 "write_zeroes": true, 00:04:43.318 "zcopy": true, 00:04:43.318 "get_zone_info": false, 00:04:43.318 "zone_management": false, 00:04:43.318 "zone_append": false, 00:04:43.318 "compare": false, 00:04:43.318 "compare_and_write": false, 00:04:43.318 "abort": true, 00:04:43.318 "seek_hole": false, 00:04:43.318 "seek_data": false, 00:04:43.318 "copy": true, 00:04:43.318 "nvme_iov_md": false 00:04:43.318 }, 00:04:43.318 "memory_domains": [ 00:04:43.318 { 00:04:43.318 "dma_device_id": "system", 00:04:43.318 "dma_device_type": 1 00:04:43.318 }, 00:04:43.318 { 00:04:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.318 "dma_device_type": 2 00:04:43.318 } 00:04:43.318 ], 00:04:43.318 "driver_specific": {} 00:04:43.318 } 00:04:43.318 ]' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 [2024-11-27 19:02:52.733658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.318 [2024-11-27 19:02:52.733710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.318 [2024-11-27 19:02:52.733730] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:43.318 [2024-11-27 19:02:52.733740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.318 [2024-11-27 19:02:52.735571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.318 [2024-11-27 19:02:52.735603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.318 
Passthru0 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.318 { 00:04:43.318 "name": "Malloc0", 00:04:43.318 "aliases": [ 00:04:43.318 "87d62d25-8217-47d5-b6ee-0c962da6388f" 00:04:43.318 ], 00:04:43.318 "product_name": "Malloc disk", 00:04:43.318 "block_size": 512, 00:04:43.318 "num_blocks": 16384, 00:04:43.318 "uuid": "87d62d25-8217-47d5-b6ee-0c962da6388f", 00:04:43.318 "assigned_rate_limits": { 00:04:43.318 "rw_ios_per_sec": 0, 00:04:43.318 "rw_mbytes_per_sec": 0, 00:04:43.318 "r_mbytes_per_sec": 0, 00:04:43.318 "w_mbytes_per_sec": 0 00:04:43.318 }, 00:04:43.318 "claimed": true, 00:04:43.318 "claim_type": "exclusive_write", 00:04:43.318 "zoned": false, 00:04:43.318 "supported_io_types": { 00:04:43.318 "read": true, 00:04:43.318 "write": true, 00:04:43.318 "unmap": true, 00:04:43.318 "flush": true, 00:04:43.318 "reset": true, 00:04:43.318 "nvme_admin": false, 00:04:43.318 "nvme_io": false, 00:04:43.318 "nvme_io_md": false, 00:04:43.318 "write_zeroes": true, 00:04:43.318 "zcopy": true, 00:04:43.318 "get_zone_info": false, 00:04:43.318 "zone_management": false, 00:04:43.318 "zone_append": false, 00:04:43.318 "compare": false, 00:04:43.318 "compare_and_write": false, 00:04:43.318 "abort": true, 00:04:43.318 "seek_hole": false, 00:04:43.318 "seek_data": false, 00:04:43.318 "copy": true, 00:04:43.318 "nvme_iov_md": false 00:04:43.318 }, 00:04:43.318 "memory_domains": [ 00:04:43.318 { 00:04:43.318 "dma_device_id": "system", 00:04:43.318 "dma_device_type": 1 00:04:43.318 }, 00:04:43.318 { 00:04:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.318 "dma_device_type": 2 00:04:43.318 } 00:04:43.318 ], 00:04:43.318 "driver_specific": {} 00:04:43.318 }, 00:04:43.318 { 00:04:43.318 "name": "Passthru0", 00:04:43.318 "aliases": [ 00:04:43.318 "212e1b4d-359a-5777-9793-3052f5e85c08" 00:04:43.318 ], 00:04:43.318 "product_name": "passthru", 00:04:43.318 "block_size": 512, 00:04:43.318 "num_blocks": 16384, 00:04:43.318 "uuid": "212e1b4d-359a-5777-9793-3052f5e85c08", 00:04:43.318 "assigned_rate_limits": { 00:04:43.318 "rw_ios_per_sec": 0, 00:04:43.318 "rw_mbytes_per_sec": 0, 00:04:43.318 "r_mbytes_per_sec": 0, 00:04:43.318 "w_mbytes_per_sec": 0 00:04:43.318 }, 00:04:43.318 "claimed": false, 00:04:43.318 "zoned": false, 00:04:43.318 "supported_io_types": { 00:04:43.318 "read": true, 00:04:43.318 "write": true, 00:04:43.318 "unmap": true, 00:04:43.318 "flush": true, 00:04:43.318 "reset": true, 00:04:43.318 "nvme_admin": false, 00:04:43.318 "nvme_io": false, 00:04:43.318 "nvme_io_md": false, 00:04:43.318 "write_zeroes": true, 00:04:43.318 "zcopy": true, 00:04:43.318 "get_zone_info": false, 00:04:43.318 "zone_management": false, 00:04:43.318 "zone_append": false, 00:04:43.318 "compare": false, 00:04:43.318 "compare_and_write": false, 00:04:43.318 "abort": true, 00:04:43.318 "seek_hole": false, 00:04:43.318 "seek_data": false, 00:04:43.318 "copy": true, 00:04:43.318 "nvme_iov_md": false 00:04:43.318 }, 00:04:43.318 "memory_domains": [ 00:04:43.318 { 00:04:43.318 "dma_device_id": "system", 00:04:43.318 "dma_device_type": 1 00:04:43.318 }, 
00:04:43.318 { 00:04:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.318 "dma_device_type": 2 00:04:43.318 } 00:04:43.318 ], 00:04:43.318 "driver_specific": { 00:04:43.318 "passthru": { 00:04:43.318 "name": "Passthru0", 00:04:43.318 "base_bdev_name": "Malloc0" 00:04:43.318 } 00:04:43.318 } 00:04:43.318 } 00:04:43.318 ]' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.318 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.318 19:02:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.319 00:04:43.319 real 0m0.242s 00:04:43.319 user 0m0.136s 00:04:43.319 sys 0m0.027s 00:04:43.319 ************************************ 00:04:43.319 END TEST rpc_integrity 00:04:43.319 ************************************ 00:04:43.319 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.319 19:02:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 19:02:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.319 19:02:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.319 19:02:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.319 19:02:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 ************************************ 00:04:43.319 START TEST rpc_plugins 00:04:43.319 ************************************ 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:43.319 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.319 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.319 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.319 19:02:52 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.319 { 00:04:43.319 "name": "Malloc1", 00:04:43.319 "aliases": [ 00:04:43.319 "abe40e8d-9d8f-459b-9f16-6adc531a35da" 00:04:43.319 ], 00:04:43.319 "product_name": "Malloc disk", 00:04:43.319 "block_size": 4096, 00:04:43.319 "num_blocks": 256, 00:04:43.319 "uuid": "abe40e8d-9d8f-459b-9f16-6adc531a35da", 00:04:43.319 "assigned_rate_limits": { 00:04:43.319 "rw_ios_per_sec": 0, 00:04:43.319 "rw_mbytes_per_sec": 0, 00:04:43.319 "r_mbytes_per_sec": 0, 00:04:43.319 "w_mbytes_per_sec": 0 00:04:43.319 }, 00:04:43.319 "claimed": false, 00:04:43.319 "zoned": false, 00:04:43.319 "supported_io_types": { 00:04:43.319 "read": true, 00:04:43.319 "write": true, 00:04:43.319 "unmap": true, 00:04:43.319 "flush": true, 00:04:43.319 "reset": true, 00:04:43.319 "nvme_admin": false, 00:04:43.319 "nvme_io": false, 00:04:43.319 "nvme_io_md": false, 00:04:43.319 "write_zeroes": true, 00:04:43.319 "zcopy": true, 00:04:43.319 "get_zone_info": false, 00:04:43.319 "zone_management": false, 00:04:43.319 "zone_append": false, 00:04:43.319 "compare": false, 00:04:43.319 "compare_and_write": false, 00:04:43.319 "abort": true, 00:04:43.319 "seek_hole": false, 00:04:43.319 "seek_data": false, 00:04:43.319 "copy": true, 00:04:43.319 "nvme_iov_md": false 00:04:43.319 }, 00:04:43.319 "memory_domains": [ 00:04:43.319 { 00:04:43.319 "dma_device_id": "system", 00:04:43.319 "dma_device_type": 1 00:04:43.319 }, 00:04:43.319 { 00:04:43.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.319 "dma_device_type": 2 00:04:43.319 } 00:04:43.319 ], 00:04:43.319 "driver_specific": {} 00:04:43.319 } 00:04:43.319 ]' 00:04:43.319 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.577 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.577 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.577 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.577 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.577 19:02:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.577 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.577 19:02:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 19:02:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.577 19:02:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.577 19:02:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.577 19:02:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.577 00:04:43.577 real 0m0.114s 00:04:43.577 user 0m0.058s 00:04:43.577 sys 0m0.017s 00:04:43.577 19:02:53 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.577 ************************************ 00:04:43.577 END TEST rpc_plugins 00:04:43.577 ************************************ 00:04:43.577 19:02:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 19:02:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.577 19:02:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.577 19:02:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.577 19:02:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 ************************************ 00:04:43.577 START TEST rpc_trace_cmd_test 
00:04:43.577 ************************************ 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.577 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.577 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57171", 00:04:43.577 "tpoint_group_mask": "0x8", 00:04:43.577 "iscsi_conn": { 00:04:43.577 "mask": "0x2", 00:04:43.577 "tpoint_mask": "0x0" 00:04:43.577 }, 00:04:43.577 "scsi": { 00:04:43.577 "mask": "0x4", 00:04:43.577 "tpoint_mask": "0x0" 00:04:43.577 }, 00:04:43.577 "bdev": { 00:04:43.577 "mask": "0x8", 00:04:43.577 "tpoint_mask": "0xffffffffffffffff" 00:04:43.577 }, 00:04:43.577 "nvmf_rdma": { 00:04:43.577 "mask": "0x10", 00:04:43.577 "tpoint_mask": "0x0" 00:04:43.577 }, 00:04:43.577 "nvmf_tcp": { 00:04:43.577 "mask": "0x20", 00:04:43.577 "tpoint_mask": "0x0" 00:04:43.577 }, 00:04:43.577 "ftl": { 00:04:43.577 "mask": "0x40", 00:04:43.577 "tpoint_mask": "0x0" 00:04:43.577 }, 00:04:43.578 "blobfs": { 00:04:43.578 "mask": "0x80", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "dsa": { 00:04:43.578 "mask": "0x200", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "thread": { 00:04:43.578 "mask": "0x400", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "nvme_pcie": { 00:04:43.578 "mask": "0x800", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "iaa": { 00:04:43.578 "mask": "0x1000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "nvme_tcp": { 00:04:43.578 "mask": "0x2000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "bdev_nvme": { 00:04:43.578 "mask": "0x4000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "sock": { 00:04:43.578 "mask": "0x8000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "blob": { 00:04:43.578 "mask": "0x10000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "bdev_raid": { 00:04:43.578 "mask": "0x20000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 }, 00:04:43.578 "scheduler": { 00:04:43.578 "mask": "0x40000", 00:04:43.578 "tpoint_mask": "0x0" 00:04:43.578 } 00:04:43.578 }' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.578 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.836 19:02:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.836 00:04:43.836 real 0m0.155s 00:04:43.836 
user 0m0.125s 00:04:43.836 sys 0m0.021s 00:04:43.836 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.836 ************************************ 00:04:43.836 END TEST rpc_trace_cmd_test 00:04:43.836 ************************************ 00:04:43.836 19:02:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.836 19:02:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.836 19:02:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.836 19:02:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.836 19:02:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.836 19:02:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.836 19:02:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.836 ************************************ 00:04:43.836 START TEST rpc_daemon_integrity 00:04:43.836 ************************************ 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.836 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.837 { 00:04:43.837 "name": "Malloc2", 00:04:43.837 "aliases": [ 00:04:43.837 "ad28b475-3f0f-4396-96f2-85396c760cad" 00:04:43.837 ], 00:04:43.837 "product_name": "Malloc disk", 00:04:43.837 "block_size": 512, 00:04:43.837 "num_blocks": 16384, 00:04:43.837 "uuid": "ad28b475-3f0f-4396-96f2-85396c760cad", 00:04:43.837 "assigned_rate_limits": { 00:04:43.837 "rw_ios_per_sec": 0, 00:04:43.837 "rw_mbytes_per_sec": 0, 00:04:43.837 "r_mbytes_per_sec": 0, 00:04:43.837 "w_mbytes_per_sec": 0 00:04:43.837 }, 00:04:43.837 "claimed": false, 00:04:43.837 "zoned": false, 00:04:43.837 "supported_io_types": { 00:04:43.837 "read": true, 00:04:43.837 "write": true, 00:04:43.837 "unmap": true, 00:04:43.837 "flush": true, 00:04:43.837 "reset": true, 00:04:43.837 "nvme_admin": false, 00:04:43.837 "nvme_io": false, 00:04:43.837 "nvme_io_md": false, 00:04:43.837 "write_zeroes": true, 00:04:43.837 "zcopy": true, 00:04:43.837 "get_zone_info": 
false, 00:04:43.837 "zone_management": false, 00:04:43.837 "zone_append": false, 00:04:43.837 "compare": false, 00:04:43.837 "compare_and_write": false, 00:04:43.837 "abort": true, 00:04:43.837 "seek_hole": false, 00:04:43.837 "seek_data": false, 00:04:43.837 "copy": true, 00:04:43.837 "nvme_iov_md": false 00:04:43.837 }, 00:04:43.837 "memory_domains": [ 00:04:43.837 { 00:04:43.837 "dma_device_id": "system", 00:04:43.837 "dma_device_type": 1 00:04:43.837 }, 00:04:43.837 { 00:04:43.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.837 "dma_device_type": 2 00:04:43.837 } 00:04:43.837 ], 00:04:43.837 "driver_specific": {} 00:04:43.837 } 00:04:43.837 ]' 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.837 [2024-11-27 19:02:53.401765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.837 [2024-11-27 19:02:53.401805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.837 [2024-11-27 19:02:53.401822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:43.837 [2024-11-27 19:02:53.401830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.837 [2024-11-27 19:02:53.403608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.837 [2024-11-27 19:02:53.403635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.837 Passthru0 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.837 { 00:04:43.837 "name": "Malloc2", 00:04:43.837 "aliases": [ 00:04:43.837 "ad28b475-3f0f-4396-96f2-85396c760cad" 00:04:43.837 ], 00:04:43.837 "product_name": "Malloc disk", 00:04:43.837 "block_size": 512, 00:04:43.837 "num_blocks": 16384, 00:04:43.837 "uuid": "ad28b475-3f0f-4396-96f2-85396c760cad", 00:04:43.837 "assigned_rate_limits": { 00:04:43.837 "rw_ios_per_sec": 0, 00:04:43.837 "rw_mbytes_per_sec": 0, 00:04:43.837 "r_mbytes_per_sec": 0, 00:04:43.837 "w_mbytes_per_sec": 0 00:04:43.837 }, 00:04:43.837 "claimed": true, 00:04:43.837 "claim_type": "exclusive_write", 00:04:43.837 "zoned": false, 00:04:43.837 "supported_io_types": { 00:04:43.837 "read": true, 00:04:43.837 "write": true, 00:04:43.837 "unmap": true, 00:04:43.837 "flush": true, 00:04:43.837 "reset": true, 00:04:43.837 "nvme_admin": false, 00:04:43.837 "nvme_io": false, 00:04:43.837 "nvme_io_md": false, 00:04:43.837 "write_zeroes": true, 00:04:43.837 "zcopy": true, 00:04:43.837 "get_zone_info": false, 00:04:43.837 "zone_management": false, 00:04:43.837 "zone_append": false, 00:04:43.837 "compare": false, 
00:04:43.837 "compare_and_write": false, 00:04:43.837 "abort": true, 00:04:43.837 "seek_hole": false, 00:04:43.837 "seek_data": false, 00:04:43.837 "copy": true, 00:04:43.837 "nvme_iov_md": false 00:04:43.837 }, 00:04:43.837 "memory_domains": [ 00:04:43.837 { 00:04:43.837 "dma_device_id": "system", 00:04:43.837 "dma_device_type": 1 00:04:43.837 }, 00:04:43.837 { 00:04:43.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.837 "dma_device_type": 2 00:04:43.837 } 00:04:43.837 ], 00:04:43.837 "driver_specific": {} 00:04:43.837 }, 00:04:43.837 { 00:04:43.837 "name": "Passthru0", 00:04:43.837 "aliases": [ 00:04:43.837 "d57771db-6e21-58b6-912d-845ed7435a30" 00:04:43.837 ], 00:04:43.837 "product_name": "passthru", 00:04:43.837 "block_size": 512, 00:04:43.837 "num_blocks": 16384, 00:04:43.837 "uuid": "d57771db-6e21-58b6-912d-845ed7435a30", 00:04:43.837 "assigned_rate_limits": { 00:04:43.837 "rw_ios_per_sec": 0, 00:04:43.837 "rw_mbytes_per_sec": 0, 00:04:43.837 "r_mbytes_per_sec": 0, 00:04:43.837 "w_mbytes_per_sec": 0 00:04:43.837 }, 00:04:43.837 "claimed": false, 00:04:43.837 "zoned": false, 00:04:43.837 "supported_io_types": { 00:04:43.837 "read": true, 00:04:43.837 "write": true, 00:04:43.837 "unmap": true, 00:04:43.837 "flush": true, 00:04:43.837 "reset": true, 00:04:43.837 "nvme_admin": false, 00:04:43.837 "nvme_io": false, 00:04:43.837 "nvme_io_md": false, 00:04:43.837 "write_zeroes": true, 00:04:43.837 "zcopy": true, 00:04:43.837 "get_zone_info": false, 00:04:43.837 "zone_management": false, 00:04:43.837 "zone_append": false, 00:04:43.837 "compare": false, 00:04:43.837 "compare_and_write": false, 00:04:43.837 "abort": true, 00:04:43.837 "seek_hole": false, 00:04:43.837 "seek_data": false, 00:04:43.837 "copy": true, 00:04:43.837 "nvme_iov_md": false 00:04:43.837 }, 00:04:43.837 "memory_domains": [ 00:04:43.837 { 00:04:43.837 "dma_device_id": "system", 00:04:43.837 "dma_device_type": 1 00:04:43.837 }, 00:04:43.837 { 00:04:43.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.837 "dma_device_type": 2 00:04:43.837 } 00:04:43.837 ], 00:04:43.837 "driver_specific": { 00:04:43.837 "passthru": { 00:04:43.837 "name": "Passthru0", 00:04:43.837 "base_bdev_name": "Malloc2" 00:04:43.837 } 00:04:43.837 } 00:04:43.837 } 00:04:43.837 ]' 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.837 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.095 00:04:44.095 real 0m0.244s 00:04:44.095 user 0m0.141s 00:04:44.095 sys 0m0.028s 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.095 ************************************ 00:04:44.095 19:02:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.095 END TEST rpc_daemon_integrity 00:04:44.095 ************************************ 00:04:44.095 19:02:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.095 19:02:53 rpc -- rpc/rpc.sh@84 -- # killprocess 57171 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@954 -- # '[' -z 57171 ']' 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@958 -- # kill -0 57171 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57171 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.095 killing process with pid 57171 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57171' 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@973 -- # kill 57171 00:04:44.095 19:02:53 rpc -- common/autotest_common.sh@978 -- # wait 57171 00:04:45.471 00:04:45.471 real 0m3.311s 00:04:45.471 user 0m3.752s 00:04:45.471 sys 0m0.629s 00:04:45.471 19:02:54 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.471 19:02:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.471 ************************************ 00:04:45.471 END TEST rpc 00:04:45.471 ************************************ 00:04:45.471 19:02:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.471 19:02:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.471 19:02:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.471 19:02:54 -- common/autotest_common.sh@10 -- # set +x 00:04:45.471 ************************************ 00:04:45.471 START TEST skip_rpc 00:04:45.471 ************************************ 00:04:45.471 19:02:54 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.471 * Looking for test storage... 
00:04:45.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.471 19:02:54 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.471 19:02:54 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.471 19:02:54 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.471 19:02:55 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.471 19:02:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.472 19:02:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.472 19:02:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.472 19:02:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.472 19:02:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.472 19:02:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.472 --rc genhtml_branch_coverage=1 00:04:45.472 --rc genhtml_function_coverage=1 00:04:45.472 --rc genhtml_legend=1 00:04:45.472 --rc geninfo_all_blocks=1 00:04:45.472 --rc geninfo_unexecuted_blocks=1 00:04:45.472 00:04:45.472 ' 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.472 --rc genhtml_branch_coverage=1 00:04:45.472 --rc genhtml_function_coverage=1 00:04:45.472 --rc genhtml_legend=1 00:04:45.472 --rc geninfo_all_blocks=1 00:04:45.472 --rc geninfo_unexecuted_blocks=1 00:04:45.472 00:04:45.472 ' 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.472 --rc genhtml_branch_coverage=1 00:04:45.472 --rc genhtml_function_coverage=1 00:04:45.472 --rc genhtml_legend=1 00:04:45.472 --rc geninfo_all_blocks=1 00:04:45.472 --rc geninfo_unexecuted_blocks=1 00:04:45.472 00:04:45.472 ' 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.472 --rc genhtml_branch_coverage=1 00:04:45.472 --rc genhtml_function_coverage=1 00:04:45.472 --rc genhtml_legend=1 00:04:45.472 --rc geninfo_all_blocks=1 00:04:45.472 --rc geninfo_unexecuted_blocks=1 00:04:45.472 00:04:45.472 ' 00:04:45.472 19:02:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.472 19:02:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.472 19:02:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.472 19:02:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.472 ************************************ 00:04:45.472 START TEST skip_rpc 00:04:45.472 ************************************ 00:04:45.472 19:02:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:45.472 19:02:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57378 00:04:45.472 19:02:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.472 19:02:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.472 19:02:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.472 [2024-11-27 19:02:55.102260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
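The spdk_tgt above was launched with --no-rpc-server, so the assertion that follows (NOT rpc_cmd spdk_get_version) boils down to: the target must keep running while any RPC against the default socket fails. A sketch of that check, with paths as in the log and the same fixed sleep the test uses in place of socket polling:

    #!/usr/bin/env bash
    # Sketch of test_skip_rpc: start the target with its RPC server
    # disabled and verify that an RPC cannot reach it.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                   # no socket to poll, so just wait
    if "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
        echo "RPC unexpectedly succeeded" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid" 2>/dev/null || true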
00:04:45.472 [2024-11-27 19:02:55.102378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57378 ] 00:04:45.732 [2024-11-27 19:02:55.263747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.992 [2024-11-27 19:02:55.371745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57378 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57378 ']' 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57378 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.265 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57378 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57378' 00:04:51.266 killing process with pid 57378 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57378 00:04:51.266 19:03:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57378 00:04:51.836 00:04:51.836 real 0m6.287s 00:04:51.836 user 0m5.848s 00:04:51.836 sys 0m0.332s 00:04:51.836 19:03:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.836 ************************************ 00:04:51.836 END TEST skip_rpc 00:04:51.836 ************************************ 00:04:51.836 19:03:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:51.836 19:03:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:51.836 19:03:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.836 19:03:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.836 19:03:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.836 ************************************ 00:04:51.836 START TEST skip_rpc_with_json 00:04:51.836 ************************************ 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57471 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57471 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57471 ']' 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.836 19:03:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.836 [2024-11-27 19:03:01.455510] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
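skip_rpc_with_json, which starts here, exercises a configuration round trip: create a TCP transport over RPC, dump the live configuration with save_config, relaunch the target from that JSON, and confirm from its log that the transport came back, as the entries below show. Condensed into a sketch (paths and the grep string are the ones the test uses; stopping the first target is elided):

    #!/usr/bin/env bash
    # Sketch of the save_config/--json round trip exercised below.
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk
    CFG=$SPDK/test/rpc/config.json

    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp
    "$SPDK/scripts/rpc.py" save_config > "$CFG"
    # ...kill the first target here, then restart from the saved JSON:
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CFG" > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt      # transport restored from config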
00:04:51.836 [2024-11-27 19:03:01.455639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ] 00:04:52.096 [2024-11-27 19:03:01.618835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.096 [2024-11-27 19:03:01.706684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.663 [2024-11-27 19:03:02.287487] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.663 request: 00:04:52.663 { 00:04:52.663 "trtype": "tcp", 00:04:52.663 "method": "nvmf_get_transports", 00:04:52.663 "req_id": 1 00:04:52.663 } 00:04:52.663 Got JSON-RPC error response 00:04:52.663 response: 00:04:52.663 { 00:04:52.663 "code": -19, 00:04:52.663 "message": "No such device" 00:04:52.663 } 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.663 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.923 [2024-11-27 19:03:02.299573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.923 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.923 { 00:04:52.923 "subsystems": [ 00:04:52.923 { 00:04:52.923 "subsystem": "fsdev", 00:04:52.923 "config": [ 00:04:52.923 { 00:04:52.923 "method": "fsdev_set_opts", 00:04:52.923 "params": { 00:04:52.923 "fsdev_io_pool_size": 65535, 00:04:52.923 "fsdev_io_cache_size": 256 00:04:52.923 } 00:04:52.923 } 00:04:52.923 ] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "keyring", 00:04:52.923 "config": [] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "iobuf", 00:04:52.923 "config": [ 00:04:52.923 { 00:04:52.923 "method": "iobuf_set_options", 00:04:52.923 "params": { 00:04:52.923 "small_pool_count": 8192, 00:04:52.923 "large_pool_count": 1024, 00:04:52.923 "small_bufsize": 8192, 00:04:52.923 "large_bufsize": 135168, 00:04:52.923 "enable_numa": false 00:04:52.923 } 00:04:52.923 } 00:04:52.923 ] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "sock", 00:04:52.923 "config": [ 00:04:52.923 { 
00:04:52.923 "method": "sock_set_default_impl", 00:04:52.923 "params": { 00:04:52.923 "impl_name": "posix" 00:04:52.923 } 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "method": "sock_impl_set_options", 00:04:52.923 "params": { 00:04:52.923 "impl_name": "ssl", 00:04:52.923 "recv_buf_size": 4096, 00:04:52.923 "send_buf_size": 4096, 00:04:52.923 "enable_recv_pipe": true, 00:04:52.923 "enable_quickack": false, 00:04:52.923 "enable_placement_id": 0, 00:04:52.923 "enable_zerocopy_send_server": true, 00:04:52.923 "enable_zerocopy_send_client": false, 00:04:52.923 "zerocopy_threshold": 0, 00:04:52.923 "tls_version": 0, 00:04:52.923 "enable_ktls": false 00:04:52.923 } 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "method": "sock_impl_set_options", 00:04:52.923 "params": { 00:04:52.923 "impl_name": "posix", 00:04:52.923 "recv_buf_size": 2097152, 00:04:52.923 "send_buf_size": 2097152, 00:04:52.923 "enable_recv_pipe": true, 00:04:52.923 "enable_quickack": false, 00:04:52.923 "enable_placement_id": 0, 00:04:52.923 "enable_zerocopy_send_server": true, 00:04:52.923 "enable_zerocopy_send_client": false, 00:04:52.923 "zerocopy_threshold": 0, 00:04:52.923 "tls_version": 0, 00:04:52.923 "enable_ktls": false 00:04:52.923 } 00:04:52.923 } 00:04:52.923 ] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "vmd", 00:04:52.923 "config": [] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "accel", 00:04:52.923 "config": [ 00:04:52.923 { 00:04:52.923 "method": "accel_set_options", 00:04:52.923 "params": { 00:04:52.923 "small_cache_size": 128, 00:04:52.923 "large_cache_size": 16, 00:04:52.923 "task_count": 2048, 00:04:52.923 "sequence_count": 2048, 00:04:52.923 "buf_count": 2048 00:04:52.923 } 00:04:52.923 } 00:04:52.923 ] 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "subsystem": "bdev", 00:04:52.923 "config": [ 00:04:52.923 { 00:04:52.923 "method": "bdev_set_options", 00:04:52.923 "params": { 00:04:52.923 "bdev_io_pool_size": 65535, 00:04:52.923 "bdev_io_cache_size": 256, 00:04:52.923 "bdev_auto_examine": true, 00:04:52.923 "iobuf_small_cache_size": 128, 00:04:52.923 "iobuf_large_cache_size": 16 00:04:52.923 } 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "method": "bdev_raid_set_options", 00:04:52.923 "params": { 00:04:52.923 "process_window_size_kb": 1024, 00:04:52.923 "process_max_bandwidth_mb_sec": 0 00:04:52.923 } 00:04:52.923 }, 00:04:52.923 { 00:04:52.923 "method": "bdev_iscsi_set_options", 00:04:52.923 "params": { 00:04:52.923 "timeout_sec": 30 00:04:52.923 } 00:04:52.923 }, 00:04:52.923 { 00:04:52.924 "method": "bdev_nvme_set_options", 00:04:52.924 "params": { 00:04:52.924 "action_on_timeout": "none", 00:04:52.924 "timeout_us": 0, 00:04:52.924 "timeout_admin_us": 0, 00:04:52.924 "keep_alive_timeout_ms": 10000, 00:04:52.924 "arbitration_burst": 0, 00:04:52.924 "low_priority_weight": 0, 00:04:52.924 "medium_priority_weight": 0, 00:04:52.924 "high_priority_weight": 0, 00:04:52.924 "nvme_adminq_poll_period_us": 10000, 00:04:52.924 "nvme_ioq_poll_period_us": 0, 00:04:52.924 "io_queue_requests": 0, 00:04:52.924 "delay_cmd_submit": true, 00:04:52.924 "transport_retry_count": 4, 00:04:52.924 "bdev_retry_count": 3, 00:04:52.924 "transport_ack_timeout": 0, 00:04:52.924 "ctrlr_loss_timeout_sec": 0, 00:04:52.924 "reconnect_delay_sec": 0, 00:04:52.924 "fast_io_fail_timeout_sec": 0, 00:04:52.924 "disable_auto_failback": false, 00:04:52.924 "generate_uuids": false, 00:04:52.924 "transport_tos": 0, 00:04:52.924 "nvme_error_stat": false, 00:04:52.924 "rdma_srq_size": 0, 00:04:52.924 "io_path_stat": false, 
00:04:52.924 "allow_accel_sequence": false, 00:04:52.924 "rdma_max_cq_size": 0, 00:04:52.924 "rdma_cm_event_timeout_ms": 0, 00:04:52.924 "dhchap_digests": [ 00:04:52.924 "sha256", 00:04:52.924 "sha384", 00:04:52.924 "sha512" 00:04:52.924 ], 00:04:52.924 "dhchap_dhgroups": [ 00:04:52.924 "null", 00:04:52.924 "ffdhe2048", 00:04:52.924 "ffdhe3072", 00:04:52.924 "ffdhe4096", 00:04:52.924 "ffdhe6144", 00:04:52.924 "ffdhe8192" 00:04:52.924 ] 00:04:52.924 } 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "method": "bdev_nvme_set_hotplug", 00:04:52.924 "params": { 00:04:52.924 "period_us": 100000, 00:04:52.924 "enable": false 00:04:52.924 } 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "method": "bdev_wait_for_examine" 00:04:52.924 } 00:04:52.924 ] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "scsi", 00:04:52.924 "config": null 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "scheduler", 00:04:52.924 "config": [ 00:04:52.924 { 00:04:52.924 "method": "framework_set_scheduler", 00:04:52.924 "params": { 00:04:52.924 "name": "static" 00:04:52.924 } 00:04:52.924 } 00:04:52.924 ] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "vhost_scsi", 00:04:52.924 "config": [] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "vhost_blk", 00:04:52.924 "config": [] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "ublk", 00:04:52.924 "config": [] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "nbd", 00:04:52.924 "config": [] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "nvmf", 00:04:52.924 "config": [ 00:04:52.924 { 00:04:52.924 "method": "nvmf_set_config", 00:04:52.924 "params": { 00:04:52.924 "discovery_filter": "match_any", 00:04:52.924 "admin_cmd_passthru": { 00:04:52.924 "identify_ctrlr": false 00:04:52.924 }, 00:04:52.924 "dhchap_digests": [ 00:04:52.924 "sha256", 00:04:52.924 "sha384", 00:04:52.924 "sha512" 00:04:52.924 ], 00:04:52.924 "dhchap_dhgroups": [ 00:04:52.924 "null", 00:04:52.924 "ffdhe2048", 00:04:52.924 "ffdhe3072", 00:04:52.924 "ffdhe4096", 00:04:52.924 "ffdhe6144", 00:04:52.924 "ffdhe8192" 00:04:52.924 ] 00:04:52.924 } 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "method": "nvmf_set_max_subsystems", 00:04:52.924 "params": { 00:04:52.924 "max_subsystems": 1024 00:04:52.924 } 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "method": "nvmf_set_crdt", 00:04:52.924 "params": { 00:04:52.924 "crdt1": 0, 00:04:52.924 "crdt2": 0, 00:04:52.924 "crdt3": 0 00:04:52.924 } 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "method": "nvmf_create_transport", 00:04:52.924 "params": { 00:04:52.924 "trtype": "TCP", 00:04:52.924 "max_queue_depth": 128, 00:04:52.924 "max_io_qpairs_per_ctrlr": 127, 00:04:52.924 "in_capsule_data_size": 4096, 00:04:52.924 "max_io_size": 131072, 00:04:52.924 "io_unit_size": 131072, 00:04:52.924 "max_aq_depth": 128, 00:04:52.924 "num_shared_buffers": 511, 00:04:52.924 "buf_cache_size": 4294967295, 00:04:52.924 "dif_insert_or_strip": false, 00:04:52.924 "zcopy": false, 00:04:52.924 "c2h_success": true, 00:04:52.924 "sock_priority": 0, 00:04:52.924 "abort_timeout_sec": 1, 00:04:52.924 "ack_timeout": 0, 00:04:52.924 "data_wr_pool_size": 0 00:04:52.924 } 00:04:52.924 } 00:04:52.924 ] 00:04:52.924 }, 00:04:52.924 { 00:04:52.924 "subsystem": "iscsi", 00:04:52.924 "config": [ 00:04:52.924 { 00:04:52.924 "method": "iscsi_set_options", 00:04:52.924 "params": { 00:04:52.924 "node_base": "iqn.2016-06.io.spdk", 00:04:52.924 "max_sessions": 128, 00:04:52.924 "max_connections_per_session": 2, 00:04:52.924 "max_queue_depth": 64, 00:04:52.924 
"default_time2wait": 2, 00:04:52.924 "default_time2retain": 20, 00:04:52.924 "first_burst_length": 8192, 00:04:52.924 "immediate_data": true, 00:04:52.924 "allow_duplicated_isid": false, 00:04:52.924 "error_recovery_level": 0, 00:04:52.924 "nop_timeout": 60, 00:04:52.924 "nop_in_interval": 30, 00:04:52.924 "disable_chap": false, 00:04:52.924 "require_chap": false, 00:04:52.924 "mutual_chap": false, 00:04:52.924 "chap_group": 0, 00:04:52.924 "max_large_datain_per_connection": 64, 00:04:52.924 "max_r2t_per_connection": 4, 00:04:52.924 "pdu_pool_size": 36864, 00:04:52.924 "immediate_data_pool_size": 16384, 00:04:52.924 "data_out_pool_size": 2048 00:04:52.924 } 00:04:52.924 } 00:04:52.924 ] 00:04:52.924 } 00:04:52.924 ] 00:04:52.924 } 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57471 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57471 ']' 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57471 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57471 00:04:52.924 killing process with pid 57471 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57471' 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57471 00:04:52.924 19:03:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57471 00:04:54.302 19:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57511 00:04:54.302 19:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.302 19:03:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57511 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57511 ']' 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57511 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57511 00:04:59.614 killing process with pid 57511 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57511' 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57511 00:04:59.614 19:03:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57511 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.549 ************************************ 00:05:00.549 END TEST skip_rpc_with_json 00:05:00.549 ************************************ 00:05:00.549 00:05:00.549 real 0m8.663s 00:05:00.549 user 0m8.181s 00:05:00.549 sys 0m0.707s 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.549 19:03:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.549 19:03:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.549 19:03:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.549 19:03:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.549 ************************************ 00:05:00.549 START TEST skip_rpc_with_delay 00:05:00.549 ************************************ 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.549 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.549 [2024-11-27 19:03:10.175730] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
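The *ERROR* above is the point of skip_rpc_with_delay: --wait-for-rpc is rejected once --no-rpc-server has disabled the RPC server it depends on, and the test only checks that this combination exits non-zero. For contrast, a sketch of the supported flow, in which the target idles until framework_start_init resumes subsystem initialization (the flag and both RPC names are standard SPDK; the fixed sleep is illustrative):

    #!/usr/bin/env bash
    # Sketch of the supported --wait-for-rpc flow: keep the RPC server,
    # let the target pause before subsystem init, then resume it by RPC.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 --wait-for-rpc &
    pid=$!
    sleep 1                                       # crude wait for the socket
    "$SPDK/scripts/rpc.py" framework_start_init   # unblocks initialization
    "$SPDK/scripts/rpc.py" spdk_get_version
    kill "$pid"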
00:05:00.810 ************************************ 00:05:00.810 END TEST skip_rpc_with_delay 00:05:00.810 ************************************ 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.810 00:05:00.810 real 0m0.122s 00:05:00.810 user 0m0.060s 00:05:00.810 sys 0m0.059s 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.810 19:03:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.810 19:03:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.810 19:03:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.810 19:03:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.810 19:03:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.810 19:03:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.810 19:03:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.810 ************************************ 00:05:00.810 START TEST exit_on_failed_rpc_init 00:05:00.810 ************************************ 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57633 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57633 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57633 ']' 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.810 19:03:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.810 [2024-11-27 19:03:10.365616] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
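waitforlisten above does not sleep a fixed time; it retries against the new target's RPC socket until it answers. Roughly (the retry budget and interval here are illustrative, not the harness's exact values):

    #!/usr/bin/env bash
    # Sketch of the waitforlisten idiom: poll the RPC socket of a
    # freshly started target instead of sleeping blindly.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &
    pid=$!
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.2
    done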
00:05:00.810 [2024-11-27 19:03:10.365900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57633 ] 00:05:01.069 [2024-11-27 19:03:10.517882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.069 [2024-11-27 19:03:10.607445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.636 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.636 [2024-11-27 19:03:11.266713] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:01.636 [2024-11-27 19:03:11.266835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57651 ] 00:05:01.895 [2024-11-27 19:03:11.427103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.895 [2024-11-27 19:03:11.526460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.895 [2024-11-27 19:03:11.526547] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
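The "socket ... in use" error above is exactly what exit_on_failed_rpc_init provokes: a second target on the same default /var/tmp/spdk.sock cannot start its RPC service, so it shuts down and the test verifies the non-zero exit. If two instances were genuinely wanted side by side, each would get its own socket; a sketch (the second socket path is illustrative; -r on spdk_tgt and -s on rpc.py select it):

    #!/usr/bin/env bash
    # Sketch: avoid the collision logged above by giving the second
    # target its own RPC socket rather than the default one.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &                          # default /var/tmp/spdk.sock
    "$SPDK/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock &   # own socket
    sleep 5
    "$SPDK/scripts/rpc.py" spdk_get_version
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk2.sock spdk_get_version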
00:05:01.895 [2024-11-27 19:03:11.526562] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.895 [2024-11-27 19:03:11.526576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57633 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57633 ']' 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57633 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57633 00:05:02.154 killing process with pid 57633 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57633' 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57633 00:05:02.154 19:03:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57633 00:05:03.532 00:05:03.532 real 0m2.702s 00:05:03.532 user 0m2.962s 00:05:03.532 sys 0m0.448s 00:05:03.532 ************************************ 00:05:03.532 END TEST exit_on_failed_rpc_init 00:05:03.532 ************************************ 00:05:03.532 19:03:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.532 19:03:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.532 19:03:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.532 ************************************ 00:05:03.532 END TEST skip_rpc 00:05:03.532 ************************************ 00:05:03.532 00:05:03.532 real 0m18.171s 00:05:03.532 user 0m17.196s 00:05:03.532 sys 0m1.736s 00:05:03.532 19:03:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.532 19:03:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.532 19:03:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.532 19:03:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.532 19:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.532 19:03:13 -- common/autotest_common.sh@10 -- # set +x 00:05:03.532 
************************************ 00:05:03.532 START TEST rpc_client 00:05:03.532 ************************************ 00:05:03.532 19:03:13 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.793 * Looking for test storage... 00:05:03.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.793 19:03:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.793 --rc genhtml_branch_coverage=1 00:05:03.793 --rc genhtml_function_coverage=1 00:05:03.793 --rc genhtml_legend=1 00:05:03.793 --rc geninfo_all_blocks=1 00:05:03.793 --rc geninfo_unexecuted_blocks=1 00:05:03.793 00:05:03.793 ' 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.793 --rc genhtml_branch_coverage=1 00:05:03.793 --rc genhtml_function_coverage=1 00:05:03.793 --rc genhtml_legend=1 00:05:03.793 --rc geninfo_all_blocks=1 00:05:03.793 --rc geninfo_unexecuted_blocks=1 00:05:03.793 00:05:03.793 ' 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.793 --rc genhtml_branch_coverage=1 00:05:03.793 --rc genhtml_function_coverage=1 00:05:03.793 --rc genhtml_legend=1 00:05:03.793 --rc geninfo_all_blocks=1 00:05:03.793 --rc geninfo_unexecuted_blocks=1 00:05:03.793 00:05:03.793 ' 00:05:03.793 19:03:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.793 --rc genhtml_branch_coverage=1 00:05:03.793 --rc genhtml_function_coverage=1 00:05:03.793 --rc genhtml_legend=1 00:05:03.793 --rc geninfo_all_blocks=1 00:05:03.793 --rc geninfo_unexecuted_blocks=1 00:05:03.793 00:05:03.793 ' 00:05:03.793 19:03:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.793 OK 00:05:03.793 19:03:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.794 00:05:03.794 real 0m0.216s 00:05:03.794 user 0m0.109s 00:05:03.794 sys 0m0.106s 00:05:03.794 19:03:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.794 19:03:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.794 ************************************ 00:05:03.794 END TEST rpc_client 00:05:03.794 ************************************ 00:05:03.794 19:03:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.794 19:03:13 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.794 19:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.794 19:03:13 -- common/autotest_common.sh@10 -- # set +x 00:05:03.794 ************************************ 00:05:03.794 START TEST json_config 00:05:03.794 ************************************ 00:05:03.794 19:03:13 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.055 19:03:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.055 19:03:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.055 19:03:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.055 19:03:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.055 19:03:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.055 19:03:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.055 19:03:13 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.055 19:03:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.055 19:03:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.055 19:03:13 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.055 19:03:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.055 19:03:13 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.055 19:03:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.055 19:03:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.055 19:03:13 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.055 19:03:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.056 --rc genhtml_branch_coverage=1 00:05:04.056 --rc genhtml_function_coverage=1 00:05:04.056 --rc genhtml_legend=1 00:05:04.056 --rc geninfo_all_blocks=1 00:05:04.056 --rc geninfo_unexecuted_blocks=1 00:05:04.056 00:05:04.056 ' 00:05:04.056 19:03:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.056 --rc genhtml_branch_coverage=1 00:05:04.056 --rc genhtml_function_coverage=1 00:05:04.056 --rc genhtml_legend=1 00:05:04.056 --rc geninfo_all_blocks=1 00:05:04.056 --rc geninfo_unexecuted_blocks=1 00:05:04.056 00:05:04.056 ' 00:05:04.056 19:03:13 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.056 --rc genhtml_branch_coverage=1 00:05:04.056 --rc genhtml_function_coverage=1 00:05:04.056 --rc genhtml_legend=1 00:05:04.056 --rc geninfo_all_blocks=1 00:05:04.056 --rc geninfo_unexecuted_blocks=1 00:05:04.056 00:05:04.056 ' 00:05:04.056 19:03:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.056 --rc genhtml_branch_coverage=1 00:05:04.056 --rc genhtml_function_coverage=1 00:05:04.056 --rc genhtml_legend=1 00:05:04.056 --rc geninfo_all_blocks=1 00:05:04.056 --rc geninfo_unexecuted_blocks=1 00:05:04.056 00:05:04.056 ' 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.056 19:03:13 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01172367-e710-474b-807e-39ce49b4e4e4 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=01172367-e710-474b-807e-39ce49b4e4e4 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.056 19:03:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.056 19:03:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.056 19:03:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.056 19:03:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.056 19:03:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.056 19:03:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.056 19:03:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.056 19:03:13 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.056 19:03:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.056 19:03:13 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.056 19:03:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:04.056 WARNING: No tests are enabled so not running JSON configuration tests 00:05:04.056 19:03:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:04.056 00:05:04.056 real 0m0.157s 00:05:04.056 user 0m0.095s 00:05:04.056 sys 0m0.061s 00:05:04.056 19:03:13 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.056 19:03:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.056 ************************************ 00:05:04.056 END TEST json_config 00:05:04.056 ************************************ 00:05:04.056 19:03:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.056 19:03:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.056 19:03:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.056 19:03:13 -- common/autotest_common.sh@10 -- # set +x 00:05:04.056 ************************************ 00:05:04.056 START TEST json_config_extra_key 00:05:04.056 ************************************ 00:05:04.056 19:03:13 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.056 19:03:13 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.056 19:03:13 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.056 19:03:13 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.318 19:03:13 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.318 19:03:13 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.318 19:03:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:04.319 19:03:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.319 19:03:13 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.319 --rc genhtml_branch_coverage=1 00:05:04.319 --rc genhtml_function_coverage=1 00:05:04.319 --rc genhtml_legend=1 00:05:04.319 --rc geninfo_all_blocks=1 00:05:04.319 --rc geninfo_unexecuted_blocks=1 00:05:04.319 00:05:04.319 ' 00:05:04.319 19:03:13 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.319 --rc genhtml_branch_coverage=1 00:05:04.319 --rc genhtml_function_coverage=1 00:05:04.319 --rc genhtml_legend=1 00:05:04.319 --rc geninfo_all_blocks=1 00:05:04.319 --rc geninfo_unexecuted_blocks=1 00:05:04.319 00:05:04.319 ' 00:05:04.319 19:03:13 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.319 --rc genhtml_branch_coverage=1 00:05:04.319 --rc genhtml_function_coverage=1 00:05:04.319 --rc genhtml_legend=1 00:05:04.319 --rc geninfo_all_blocks=1 00:05:04.319 --rc geninfo_unexecuted_blocks=1 00:05:04.319 00:05:04.319 ' 00:05:04.319 19:03:13 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.319 --rc genhtml_branch_coverage=1 00:05:04.319 --rc 
genhtml_function_coverage=1 00:05:04.319 --rc genhtml_legend=1 00:05:04.319 --rc geninfo_all_blocks=1 00:05:04.319 --rc geninfo_unexecuted_blocks=1 00:05:04.319 00:05:04.319 ' 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01172367-e710-474b-807e-39ce49b4e4e4 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=01172367-e710-474b-807e-39ce49b4e4e4 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.319 19:03:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.319 19:03:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.319 19:03:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.319 19:03:13 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.319 19:03:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:04.319 19:03:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.319 19:03:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:04.319 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:04.319 INFO: launching applications... 00:05:04.320 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.320 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
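The `[: : integer expression expected` complaint from nvmf/common.sh line 33, visible again just above, is a bash pitfall rather than a test failure: `'[' '' -eq 1 ']'` hands an empty string to `-eq`, which requires integers on both sides, so the test exits with status 2 and the branch is simply skipped. A minimal sketch of the failure mode and some defensive rewrites (the flag name is hypothetical, not the variable common.sh actually checks):

    # Reproduce the error: -eq needs an integer on both sides.
    FLAG=""                        # hypothetical flag left empty by the config
    [ "$FLAG" -eq 1 ] && echo on   # stderr: [: : integer expression expected

    # Defensive variants that treat an empty or unset flag as 0:
    [ "${FLAG:-0}" -eq 1 ] && echo on             # default the expansion
    (( ${FLAG:-0} == 1 )) && echo on              # arithmetic context
    [[ -n "$FLAG" && "$FLAG" -eq 1 ]] && echo on  # guard for non-empty first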
00:05:04.320 19:03:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57845 00:05:04.320 Waiting for target to run... 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57845 /var/tmp/spdk_tgt.sock 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57845 ']' 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.320 19:03:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.320 19:03:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:04.320 [2024-11-27 19:03:13.852427] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:04.320 [2024-11-27 19:03:13.852562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57845 ] 00:05:04.578 [2024-11-27 19:03:14.183295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.836 [2024-11-27 19:03:14.276515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.093 19:03:14 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.093 00:05:05.093 INFO: shutting down applications... 00:05:05.093 19:03:14 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:05.093 19:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
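The `waitforlisten` call traced here keeps the test from racing the target: spdk_tgt (pid 57845) is launched with `-r /var/tmp/spdk_tgt.sock`, and the harness polls, bounded by `max_retries=100`, until that UNIX-domain socket answers RPCs. A minimal sketch of the idea, assuming `rpc_get_methods` as the liveness probe rather than the exact internals of autotest_common.sh:

    # Sketch: wait until the target's UNIX-domain RPC socket answers, or give up.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} max_retries=${3:-100}
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }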
00:05:05.093 19:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57845 ]] 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57845 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57845 00:05:05.093 19:03:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.659 19:03:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.659 19:03:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.659 19:03:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57845 00:05:05.659 19:03:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.224 19:03:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.224 19:03:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.224 19:03:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57845 00:05:06.224 19:03:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.793 19:03:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.793 19:03:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.794 SPDK target shutdown done 00:05:06.794 19:03:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57845 00:05:06.794 19:03:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.794 19:03:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:06.794 19:03:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.794 19:03:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.794 Success 00:05:06.794 19:03:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.794 00:05:06.794 real 0m2.580s 00:05:06.794 user 0m2.368s 00:05:06.794 sys 0m0.428s 00:05:06.794 ************************************ 00:05:06.794 END TEST json_config_extra_key 00:05:06.794 ************************************ 00:05:06.794 19:03:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.794 19:03:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 19:03:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.794 19:03:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.794 19:03:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.794 19:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 ************************************ 00:05:06.794 START TEST alias_rpc 00:05:06.794 ************************************ 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.794 * Looking for test storage... 
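The shutdown just traced is the mirror image of the startup wait: json_config/common.sh sends `SIGINT` to pid 57845, then polls `kill -0` every 0.5 s for at most 30 iterations (three rounds of `sleep 0.5` sufficed in this run) before declaring `SPDK target shutdown done`. A condensed sketch of that loop; the escalation at the end is an assumption, since the trace never shows a stuck target:

    # Graceful shutdown with a bounded wait, mirroring json_config/common.sh@38-45.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null   # assumed last resort, not shown in the trace
        return 1
    }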
00:05:06.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.794 19:03:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.794 --rc genhtml_branch_coverage=1 00:05:06.794 --rc genhtml_function_coverage=1 00:05:06.794 --rc genhtml_legend=1 00:05:06.794 --rc geninfo_all_blocks=1 00:05:06.794 --rc geninfo_unexecuted_blocks=1 00:05:06.794 00:05:06.794 ' 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.794 --rc genhtml_branch_coverage=1 00:05:06.794 --rc genhtml_function_coverage=1 00:05:06.794 --rc genhtml_legend=1 00:05:06.794 --rc geninfo_all_blocks=1 00:05:06.794 --rc geninfo_unexecuted_blocks=1 00:05:06.794 00:05:06.794 ' 00:05:06.794 19:03:16 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.794 --rc genhtml_branch_coverage=1 00:05:06.794 --rc genhtml_function_coverage=1 00:05:06.794 --rc genhtml_legend=1 00:05:06.794 --rc geninfo_all_blocks=1 00:05:06.794 --rc geninfo_unexecuted_blocks=1 00:05:06.794 00:05:06.794 ' 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.794 --rc genhtml_branch_coverage=1 00:05:06.794 --rc genhtml_function_coverage=1 00:05:06.794 --rc genhtml_legend=1 00:05:06.794 --rc geninfo_all_blocks=1 00:05:06.794 --rc geninfo_unexecuted_blocks=1 00:05:06.794 00:05:06.794 ' 00:05:06.794 19:03:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.794 19:03:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57931 00:05:06.794 19:03:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57931 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57931 ']' 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.794 19:03:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 19:03:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.052 [2024-11-27 19:03:16.488642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
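Before issuing any RPCs, alias_rpc.sh arms `trap 'killprocess $spdk_tgt_pid; exit 1' ERR`, so a failing command tears down the freshly started target (pid 57931) instead of leaking it into the next test. A minimal sketch of the pattern, assuming a simplified `killprocess` and eliding the waitforlisten step:

    # Tie target cleanup to any command failure in the test body.
    set -E                                  # let the ERR trap fire in functions
    killprocess() { kill "$1" 2>/dev/null; wait "$1" 2>/dev/null || true; }

    build/bin/spdk_tgt &                    # target under test
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR

    # ... wait for the RPC socket; then RPCs whose failure triggers the trap ...
    scripts/rpc.py load_config -i           # as in the trace above

    killprocess "$spdk_tgt_pid"             # normal-path teardown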
00:05:07.052 [2024-11-27 19:03:16.488787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57931 ] 00:05:07.052 [2024-11-27 19:03:16.647901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.309 [2024-11-27 19:03:16.744793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.876 19:03:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.876 19:03:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.876 19:03:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:08.134 19:03:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57931 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57931 ']' 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57931 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57931 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.134 killing process with pid 57931 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57931' 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 57931 00:05:08.134 19:03:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 57931 00:05:09.511 00:05:09.511 real 0m2.582s 00:05:09.511 user 0m2.629s 00:05:09.511 sys 0m0.468s 00:05:09.511 ************************************ 00:05:09.511 END TEST alias_rpc 00:05:09.511 ************************************ 00:05:09.511 19:03:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.511 19:03:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.511 19:03:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.511 19:03:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.511 19:03:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.511 19:03:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.511 19:03:18 -- common/autotest_common.sh@10 -- # set +x 00:05:09.511 ************************************ 00:05:09.511 START TEST spdkcli_tcp 00:05:09.511 ************************************ 00:05:09.511 19:03:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.511 * Looking for test storage... 
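The `killprocess` steps traced above are careful about identity: the helper confirms the pid is still alive, resolves its command name with `ps --no-headers -o comm=`, checks the name (here `reactor_0`) is not `sudo` before signalling, then echoes what it is doing, kills, and `wait`s so the exit status is reaped. A re-creation of that flow; the sudo branch is an assumption, since this run never takes it:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1          # already gone?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            pid=$(pgrep -P "$pid")   # assumed: signal the wrapped child instead
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }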
00:05:09.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:09.511 19:03:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.511 19:03:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.511 19:03:18 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.511 19:03:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.511 --rc genhtml_branch_coverage=1 00:05:09.511 --rc genhtml_function_coverage=1 00:05:09.511 --rc genhtml_legend=1 00:05:09.511 --rc geninfo_all_blocks=1 00:05:09.511 --rc geninfo_unexecuted_blocks=1 00:05:09.511 00:05:09.511 ' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.511 --rc genhtml_branch_coverage=1 00:05:09.511 --rc genhtml_function_coverage=1 00:05:09.511 --rc genhtml_legend=1 00:05:09.511 --rc geninfo_all_blocks=1 00:05:09.511 --rc geninfo_unexecuted_blocks=1 00:05:09.511 
00:05:09.511 ' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.511 --rc genhtml_branch_coverage=1 00:05:09.511 --rc genhtml_function_coverage=1 00:05:09.511 --rc genhtml_legend=1 00:05:09.511 --rc geninfo_all_blocks=1 00:05:09.511 --rc geninfo_unexecuted_blocks=1 00:05:09.511 00:05:09.511 ' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.511 --rc genhtml_branch_coverage=1 00:05:09.511 --rc genhtml_function_coverage=1 00:05:09.511 --rc genhtml_legend=1 00:05:09.511 --rc geninfo_all_blocks=1 00:05:09.511 --rc geninfo_unexecuted_blocks=1 00:05:09.511 00:05:09.511 ' 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58022 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58022 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58022 ']' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.511 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.511 19:03:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.511 [2024-11-27 19:03:19.111195] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
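spdkcli_tcp starts its target with `-m 0x3 -p 0` instead of the single-core `-m 0x1` used by the earlier tests: mask 0x3 is binary 11, so the EAL banner that follows reports two cores and reactors come up on cores 0 and 1, with `-p 0` selecting core 0 as the main core. A quick check of the mask arithmetic:

    # Each set bit in the -m core mask is one reactor core: 0x3 -> cores 0 and 1.
    mask=0x3
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done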
00:05:09.511 [2024-11-27 19:03:19.111316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:05:09.769 [2024-11-27 19:03:19.266240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.769 [2024-11-27 19:03:19.359846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.769 [2024-11-27 19:03:19.359899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.333 19:03:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.333 19:03:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:10.333 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58039 00:05:10.333 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.333 19:03:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.593 [ 00:05:10.593 "bdev_malloc_delete", 00:05:10.593 "bdev_malloc_create", 00:05:10.593 "bdev_null_resize", 00:05:10.593 "bdev_null_delete", 00:05:10.593 "bdev_null_create", 00:05:10.593 "bdev_nvme_cuse_unregister", 00:05:10.593 "bdev_nvme_cuse_register", 00:05:10.593 "bdev_opal_new_user", 00:05:10.593 "bdev_opal_set_lock_state", 00:05:10.593 "bdev_opal_delete", 00:05:10.593 "bdev_opal_get_info", 00:05:10.593 "bdev_opal_create", 00:05:10.593 "bdev_nvme_opal_revert", 00:05:10.593 "bdev_nvme_opal_init", 00:05:10.593 "bdev_nvme_send_cmd", 00:05:10.593 "bdev_nvme_set_keys", 00:05:10.593 "bdev_nvme_get_path_iostat", 00:05:10.593 "bdev_nvme_get_mdns_discovery_info", 00:05:10.593 "bdev_nvme_stop_mdns_discovery", 00:05:10.593 "bdev_nvme_start_mdns_discovery", 00:05:10.593 "bdev_nvme_set_multipath_policy", 00:05:10.593 "bdev_nvme_set_preferred_path", 00:05:10.593 "bdev_nvme_get_io_paths", 00:05:10.593 "bdev_nvme_remove_error_injection", 00:05:10.593 "bdev_nvme_add_error_injection", 00:05:10.593 "bdev_nvme_get_discovery_info", 00:05:10.593 "bdev_nvme_stop_discovery", 00:05:10.593 "bdev_nvme_start_discovery", 00:05:10.593 "bdev_nvme_get_controller_health_info", 00:05:10.593 "bdev_nvme_disable_controller", 00:05:10.593 "bdev_nvme_enable_controller", 00:05:10.593 "bdev_nvme_reset_controller", 00:05:10.593 "bdev_nvme_get_transport_statistics", 00:05:10.593 "bdev_nvme_apply_firmware", 00:05:10.593 "bdev_nvme_detach_controller", 00:05:10.593 "bdev_nvme_get_controllers", 00:05:10.593 "bdev_nvme_attach_controller", 00:05:10.593 "bdev_nvme_set_hotplug", 00:05:10.593 "bdev_nvme_set_options", 00:05:10.593 "bdev_passthru_delete", 00:05:10.593 "bdev_passthru_create", 00:05:10.593 "bdev_lvol_set_parent_bdev", 00:05:10.593 "bdev_lvol_set_parent", 00:05:10.593 "bdev_lvol_check_shallow_copy", 00:05:10.593 "bdev_lvol_start_shallow_copy", 00:05:10.593 "bdev_lvol_grow_lvstore", 00:05:10.593 "bdev_lvol_get_lvols", 00:05:10.593 "bdev_lvol_get_lvstores", 00:05:10.593 "bdev_lvol_delete", 00:05:10.593 "bdev_lvol_set_read_only", 00:05:10.593 "bdev_lvol_resize", 00:05:10.593 "bdev_lvol_decouple_parent", 00:05:10.593 "bdev_lvol_inflate", 00:05:10.593 "bdev_lvol_rename", 00:05:10.593 "bdev_lvol_clone_bdev", 00:05:10.593 "bdev_lvol_clone", 00:05:10.593 "bdev_lvol_snapshot", 00:05:10.593 "bdev_lvol_create", 00:05:10.593 "bdev_lvol_delete_lvstore", 00:05:10.593 "bdev_lvol_rename_lvstore", 00:05:10.593 
"bdev_lvol_create_lvstore", 00:05:10.593 "bdev_raid_set_options", 00:05:10.593 "bdev_raid_remove_base_bdev", 00:05:10.593 "bdev_raid_add_base_bdev", 00:05:10.593 "bdev_raid_delete", 00:05:10.593 "bdev_raid_create", 00:05:10.593 "bdev_raid_get_bdevs", 00:05:10.593 "bdev_error_inject_error", 00:05:10.593 "bdev_error_delete", 00:05:10.593 "bdev_error_create", 00:05:10.593 "bdev_split_delete", 00:05:10.593 "bdev_split_create", 00:05:10.593 "bdev_delay_delete", 00:05:10.593 "bdev_delay_create", 00:05:10.593 "bdev_delay_update_latency", 00:05:10.593 "bdev_zone_block_delete", 00:05:10.593 "bdev_zone_block_create", 00:05:10.593 "blobfs_create", 00:05:10.593 "blobfs_detect", 00:05:10.593 "blobfs_set_cache_size", 00:05:10.593 "bdev_xnvme_delete", 00:05:10.593 "bdev_xnvme_create", 00:05:10.593 "bdev_aio_delete", 00:05:10.593 "bdev_aio_rescan", 00:05:10.593 "bdev_aio_create", 00:05:10.593 "bdev_ftl_set_property", 00:05:10.593 "bdev_ftl_get_properties", 00:05:10.593 "bdev_ftl_get_stats", 00:05:10.593 "bdev_ftl_unmap", 00:05:10.593 "bdev_ftl_unload", 00:05:10.593 "bdev_ftl_delete", 00:05:10.593 "bdev_ftl_load", 00:05:10.593 "bdev_ftl_create", 00:05:10.593 "bdev_virtio_attach_controller", 00:05:10.593 "bdev_virtio_scsi_get_devices", 00:05:10.593 "bdev_virtio_detach_controller", 00:05:10.593 "bdev_virtio_blk_set_hotplug", 00:05:10.593 "bdev_iscsi_delete", 00:05:10.593 "bdev_iscsi_create", 00:05:10.593 "bdev_iscsi_set_options", 00:05:10.593 "accel_error_inject_error", 00:05:10.593 "ioat_scan_accel_module", 00:05:10.593 "dsa_scan_accel_module", 00:05:10.593 "iaa_scan_accel_module", 00:05:10.593 "keyring_file_remove_key", 00:05:10.593 "keyring_file_add_key", 00:05:10.593 "keyring_linux_set_options", 00:05:10.593 "fsdev_aio_delete", 00:05:10.593 "fsdev_aio_create", 00:05:10.593 "iscsi_get_histogram", 00:05:10.593 "iscsi_enable_histogram", 00:05:10.593 "iscsi_set_options", 00:05:10.593 "iscsi_get_auth_groups", 00:05:10.593 "iscsi_auth_group_remove_secret", 00:05:10.593 "iscsi_auth_group_add_secret", 00:05:10.593 "iscsi_delete_auth_group", 00:05:10.593 "iscsi_create_auth_group", 00:05:10.593 "iscsi_set_discovery_auth", 00:05:10.593 "iscsi_get_options", 00:05:10.593 "iscsi_target_node_request_logout", 00:05:10.593 "iscsi_target_node_set_redirect", 00:05:10.593 "iscsi_target_node_set_auth", 00:05:10.593 "iscsi_target_node_add_lun", 00:05:10.593 "iscsi_get_stats", 00:05:10.593 "iscsi_get_connections", 00:05:10.593 "iscsi_portal_group_set_auth", 00:05:10.593 "iscsi_start_portal_group", 00:05:10.593 "iscsi_delete_portal_group", 00:05:10.593 "iscsi_create_portal_group", 00:05:10.593 "iscsi_get_portal_groups", 00:05:10.593 "iscsi_delete_target_node", 00:05:10.593 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.593 "iscsi_target_node_add_pg_ig_maps", 00:05:10.593 "iscsi_create_target_node", 00:05:10.593 "iscsi_get_target_nodes", 00:05:10.593 "iscsi_delete_initiator_group", 00:05:10.593 "iscsi_initiator_group_remove_initiators", 00:05:10.593 "iscsi_initiator_group_add_initiators", 00:05:10.593 "iscsi_create_initiator_group", 00:05:10.593 "iscsi_get_initiator_groups", 00:05:10.593 "nvmf_set_crdt", 00:05:10.593 "nvmf_set_config", 00:05:10.593 "nvmf_set_max_subsystems", 00:05:10.593 "nvmf_stop_mdns_prr", 00:05:10.593 "nvmf_publish_mdns_prr", 00:05:10.593 "nvmf_subsystem_get_listeners", 00:05:10.593 "nvmf_subsystem_get_qpairs", 00:05:10.593 "nvmf_subsystem_get_controllers", 00:05:10.593 "nvmf_get_stats", 00:05:10.593 "nvmf_get_transports", 00:05:10.593 "nvmf_create_transport", 00:05:10.593 "nvmf_get_targets", 00:05:10.593 
"nvmf_delete_target", 00:05:10.593 "nvmf_create_target", 00:05:10.593 "nvmf_subsystem_allow_any_host", 00:05:10.593 "nvmf_subsystem_set_keys", 00:05:10.593 "nvmf_subsystem_remove_host", 00:05:10.593 "nvmf_subsystem_add_host", 00:05:10.593 "nvmf_ns_remove_host", 00:05:10.593 "nvmf_ns_add_host", 00:05:10.593 "nvmf_subsystem_remove_ns", 00:05:10.593 "nvmf_subsystem_set_ns_ana_group", 00:05:10.593 "nvmf_subsystem_add_ns", 00:05:10.593 "nvmf_subsystem_listener_set_ana_state", 00:05:10.593 "nvmf_discovery_get_referrals", 00:05:10.593 "nvmf_discovery_remove_referral", 00:05:10.593 "nvmf_discovery_add_referral", 00:05:10.593 "nvmf_subsystem_remove_listener", 00:05:10.593 "nvmf_subsystem_add_listener", 00:05:10.593 "nvmf_delete_subsystem", 00:05:10.593 "nvmf_create_subsystem", 00:05:10.593 "nvmf_get_subsystems", 00:05:10.593 "env_dpdk_get_mem_stats", 00:05:10.593 "nbd_get_disks", 00:05:10.593 "nbd_stop_disk", 00:05:10.593 "nbd_start_disk", 00:05:10.593 "ublk_recover_disk", 00:05:10.593 "ublk_get_disks", 00:05:10.593 "ublk_stop_disk", 00:05:10.593 "ublk_start_disk", 00:05:10.593 "ublk_destroy_target", 00:05:10.593 "ublk_create_target", 00:05:10.593 "virtio_blk_create_transport", 00:05:10.593 "virtio_blk_get_transports", 00:05:10.593 "vhost_controller_set_coalescing", 00:05:10.593 "vhost_get_controllers", 00:05:10.593 "vhost_delete_controller", 00:05:10.593 "vhost_create_blk_controller", 00:05:10.593 "vhost_scsi_controller_remove_target", 00:05:10.593 "vhost_scsi_controller_add_target", 00:05:10.593 "vhost_start_scsi_controller", 00:05:10.593 "vhost_create_scsi_controller", 00:05:10.593 "thread_set_cpumask", 00:05:10.593 "scheduler_set_options", 00:05:10.593 "framework_get_governor", 00:05:10.593 "framework_get_scheduler", 00:05:10.593 "framework_set_scheduler", 00:05:10.593 "framework_get_reactors", 00:05:10.593 "thread_get_io_channels", 00:05:10.593 "thread_get_pollers", 00:05:10.593 "thread_get_stats", 00:05:10.594 "framework_monitor_context_switch", 00:05:10.594 "spdk_kill_instance", 00:05:10.594 "log_enable_timestamps", 00:05:10.594 "log_get_flags", 00:05:10.594 "log_clear_flag", 00:05:10.594 "log_set_flag", 00:05:10.594 "log_get_level", 00:05:10.594 "log_set_level", 00:05:10.594 "log_get_print_level", 00:05:10.594 "log_set_print_level", 00:05:10.594 "framework_enable_cpumask_locks", 00:05:10.594 "framework_disable_cpumask_locks", 00:05:10.594 "framework_wait_init", 00:05:10.594 "framework_start_init", 00:05:10.594 "scsi_get_devices", 00:05:10.594 "bdev_get_histogram", 00:05:10.594 "bdev_enable_histogram", 00:05:10.594 "bdev_set_qos_limit", 00:05:10.594 "bdev_set_qd_sampling_period", 00:05:10.594 "bdev_get_bdevs", 00:05:10.594 "bdev_reset_iostat", 00:05:10.594 "bdev_get_iostat", 00:05:10.594 "bdev_examine", 00:05:10.594 "bdev_wait_for_examine", 00:05:10.594 "bdev_set_options", 00:05:10.594 "accel_get_stats", 00:05:10.594 "accel_set_options", 00:05:10.594 "accel_set_driver", 00:05:10.594 "accel_crypto_key_destroy", 00:05:10.594 "accel_crypto_keys_get", 00:05:10.594 "accel_crypto_key_create", 00:05:10.594 "accel_assign_opc", 00:05:10.594 "accel_get_module_info", 00:05:10.594 "accel_get_opc_assignments", 00:05:10.594 "vmd_rescan", 00:05:10.594 "vmd_remove_device", 00:05:10.594 "vmd_enable", 00:05:10.594 "sock_get_default_impl", 00:05:10.594 "sock_set_default_impl", 00:05:10.594 "sock_impl_set_options", 00:05:10.594 "sock_impl_get_options", 00:05:10.594 "iobuf_get_stats", 00:05:10.594 "iobuf_set_options", 00:05:10.594 "keyring_get_keys", 00:05:10.594 "framework_get_pci_devices", 00:05:10.594 
"framework_get_config", 00:05:10.594 "framework_get_subsystems", 00:05:10.594 "fsdev_set_opts", 00:05:10.594 "fsdev_get_opts", 00:05:10.594 "trace_get_info", 00:05:10.594 "trace_get_tpoint_group_mask", 00:05:10.594 "trace_disable_tpoint_group", 00:05:10.594 "trace_enable_tpoint_group", 00:05:10.594 "trace_clear_tpoint_mask", 00:05:10.594 "trace_set_tpoint_mask", 00:05:10.594 "notify_get_notifications", 00:05:10.594 "notify_get_types", 00:05:10.594 "spdk_get_version", 00:05:10.594 "rpc_get_methods" 00:05:10.594 ] 00:05:10.594 19:03:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.594 19:03:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.594 19:03:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58022 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58022 ']' 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58022 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58022 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.594 killing process with pid 58022 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58022' 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58022 00:05:10.594 19:03:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58022 00:05:11.995 00:05:11.995 real 0m2.579s 00:05:11.995 user 0m4.564s 00:05:11.995 sys 0m0.481s 00:05:11.995 ************************************ 00:05:11.995 END TEST spdkcli_tcp 00:05:11.995 ************************************ 00:05:11.995 19:03:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.995 19:03:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 19:03:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.995 19:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.995 19:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.995 19:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 ************************************ 00:05:11.995 START TEST dpdk_mem_utility 00:05:11.995 ************************************ 00:05:11.995 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.995 * Looking for test storage... 
00:05:11.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:11.995 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.995 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.995 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.995 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:11.995 19:03:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.254 19:03:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.254 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.254 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.254 --rc genhtml_branch_coverage=1 00:05:12.254 --rc genhtml_function_coverage=1 00:05:12.254 --rc genhtml_legend=1 00:05:12.254 --rc geninfo_all_blocks=1 00:05:12.254 --rc geninfo_unexecuted_blocks=1 00:05:12.254 00:05:12.254 ' 00:05:12.254 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.254 --rc 
genhtml_branch_coverage=1 00:05:12.254 --rc genhtml_function_coverage=1 00:05:12.254 --rc genhtml_legend=1 00:05:12.254 --rc geninfo_all_blocks=1 00:05:12.254 --rc geninfo_unexecuted_blocks=1 00:05:12.254 00:05:12.254 ' 00:05:12.254 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.254 --rc genhtml_branch_coverage=1 00:05:12.255 --rc genhtml_function_coverage=1 00:05:12.255 --rc genhtml_legend=1 00:05:12.255 --rc geninfo_all_blocks=1 00:05:12.255 --rc geninfo_unexecuted_blocks=1 00:05:12.255 00:05:12.255 ' 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.255 --rc genhtml_branch_coverage=1 00:05:12.255 --rc genhtml_function_coverage=1 00:05:12.255 --rc genhtml_legend=1 00:05:12.255 --rc geninfo_all_blocks=1 00:05:12.255 --rc geninfo_unexecuted_blocks=1 00:05:12.255 00:05:12.255 ' 00:05:12.255 19:03:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.255 19:03:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58127 00:05:12.255 19:03:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58127 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58127 ']' 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.255 19:03:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.255 19:03:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.255 [2024-11-27 19:03:21.708668] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:12.255 [2024-11-27 19:03:21.708785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58127 ] 00:05:12.255 [2024-11-27 19:03:21.863295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.513 [2024-11-27 19:03:21.952366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.081 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.081 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:13.081 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.081 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.081 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.082 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.082 { 00:05:13.082 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.082 } 00:05:13.082 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.082 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.082 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:13.082 1 heaps totaling size 824.000000 MiB 00:05:13.082 size: 824.000000 MiB heap id: 0 00:05:13.082 end heaps---------- 00:05:13.082 9 mempools totaling size 603.782043 MiB 00:05:13.082 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.082 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.082 size: 100.555481 MiB name: bdev_io_58127 00:05:13.082 size: 50.003479 MiB name: msgpool_58127 00:05:13.082 size: 36.509338 MiB name: fsdev_io_58127 00:05:13.082 size: 21.763794 MiB name: PDU_Pool 00:05:13.082 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.082 size: 4.133484 MiB name: evtpool_58127 00:05:13.082 size: 0.026123 MiB name: Session_Pool 00:05:13.082 end mempools------- 00:05:13.082 6 memzones totaling size 4.142822 MiB 00:05:13.082 size: 1.000366 MiB name: RG_ring_0_58127 00:05:13.082 size: 1.000366 MiB name: RG_ring_1_58127 00:05:13.082 size: 1.000366 MiB name: RG_ring_4_58127 00:05:13.082 size: 1.000366 MiB name: RG_ring_5_58127 00:05:13.082 size: 0.125366 MiB name: RG_ring_2_58127 00:05:13.082 size: 0.015991 MiB name: RG_ring_3_58127 00:05:13.082 end memzones------- 00:05:13.082 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.082 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:05:13.082 list of free elements. 
size: 16.780884 MiB 00:05:13.082 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:13.082 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:13.082 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:13.082 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:13.082 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:13.082 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:13.082 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:13.082 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:13.082 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:13.082 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:13.082 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:13.082 element at address: 0x20001b400000 with size: 0.560486 MiB 00:05:13.082 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:13.082 element at address: 0x200019600000 with size: 0.488464 MiB 00:05:13.082 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:13.082 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:13.082 element at address: 0x200028800000 with size: 0.390686 MiB 00:05:13.082 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:13.082 list of standard malloc elements. size: 199.288208 MiB 00:05:13.082 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:13.082 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:13.082 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:13.082 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:13.082 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:13.082 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.082 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:13.082 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.082 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:13.082 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:13.082 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:13.082 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:13.082 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:13.082 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:13.082 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:13.082 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d1c0 
with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b491ac0 with size: 0.000244 MiB 
00:05:13.083 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:13.083 element at 
address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:13.083 element at address: 0x200028864140 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886ae00 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:13.083 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d680 
with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:13.084 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:13.084 list of memzone associated elements. 
size: 607.930908 MiB 00:05:13.084 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:13.084 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.084 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:13.084 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.084 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:13.084 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58127_0 00:05:13.084 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:13.084 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58127_0 00:05:13.084 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:13.084 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58127_0 00:05:13.084 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:13.084 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.084 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:13.084 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.084 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:13.084 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58127_0 00:05:13.084 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:13.084 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58127 00:05:13.084 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.084 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58127 00:05:13.084 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:13.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.084 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:13.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.084 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:13.084 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.084 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:13.084 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.084 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:13.084 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58127 00:05:13.084 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:13.084 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58127 00:05:13.084 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:13.084 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58127 00:05:13.084 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:13.084 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58127 00:05:13.084 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:13.084 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58127 00:05:13.084 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:13.084 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58127 00:05:13.084 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:13.084 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.084 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:13.084 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.084 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:13.084 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.084 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:13.084 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58127 00:05:13.084 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:13.084 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58127 00:05:13.084 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:13.084 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.084 element at address: 0x200028864240 with size: 0.023804 MiB 00:05:13.084 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.084 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:13.084 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58127 00:05:13.084 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:05:13.084 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.084 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:13.084 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58127 00:05:13.084 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:13.084 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58127 00:05:13.084 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:13.084 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58127 00:05:13.084 element at address: 0x20002886af00 with size: 0.000366 MiB 00:05:13.084 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.084 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.084 19:03:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58127 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58127 ']' 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58127 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58127 00:05:13.084 killing process with pid 58127 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58127' 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58127 00:05:13.084 19:03:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58127 00:05:14.463 ************************************ 00:05:14.463 END TEST dpdk_mem_utility 00:05:14.463 ************************************ 00:05:14.463 00:05:14.463 real 0m2.407s 00:05:14.463 user 0m2.408s 00:05:14.463 sys 0m0.388s 00:05:14.463 19:03:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.463 19:03:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.463 19:03:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.463 19:03:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.463 19:03:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.463 19:03:23 -- common/autotest_common.sh@10 -- # set +x 
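The killprocess sequence just logged (kill -0, ps --no-headers -o comm=, kill, wait) is the standard autotest cleanup for the SPDK target that served the dpdk_mem_utility RPCs. A minimal sketch of that pattern, assuming a plain reactor_* process name and omitting the sudo special case the real autotest_common.sh helper handles:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # process already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK app
    # NOTE: simplified sketch; the real helper branches when name is "sudo"
    echo "killing process with pid $pid"
    kill "$pid"                                 # SIGTERM lets SPDK shut down cleanly
    wait "$pid" 2>/dev/null || true             # reap; ignore "not a child" errors
}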
00:05:14.463 ************************************ 00:05:14.463 START TEST event 00:05:14.463 ************************************ 00:05:14.463 19:03:23 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.463 * Looking for test storage... 00:05:14.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.463 19:03:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.463 19:03:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.463 19:03:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.463 19:03:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.463 19:03:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.463 19:03:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.463 19:03:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.463 19:03:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.463 19:03:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.463 19:03:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.463 19:03:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.463 19:03:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:14.463 19:03:24 event -- scripts/common.sh@345 -- # : 1 00:05:14.463 19:03:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.463 19:03:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.463 19:03:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:14.463 19:03:24 event -- scripts/common.sh@353 -- # local d=1 00:05:14.463 19:03:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.463 19:03:24 event -- scripts/common.sh@355 -- # echo 1 00:05:14.463 19:03:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.463 19:03:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:14.463 19:03:24 event -- scripts/common.sh@353 -- # local d=2 00:05:14.463 19:03:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.463 19:03:24 event -- scripts/common.sh@355 -- # echo 2 00:05:14.463 19:03:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.463 19:03:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.463 19:03:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.463 19:03:24 event -- scripts/common.sh@368 -- # return 0 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.463 --rc genhtml_branch_coverage=1 00:05:14.463 --rc genhtml_function_coverage=1 00:05:14.463 --rc genhtml_legend=1 00:05:14.463 --rc geninfo_all_blocks=1 00:05:14.463 --rc geninfo_unexecuted_blocks=1 00:05:14.463 00:05:14.463 ' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.463 --rc genhtml_branch_coverage=1 00:05:14.463 --rc genhtml_function_coverage=1 00:05:14.463 --rc genhtml_legend=1 00:05:14.463 --rc 
geninfo_all_blocks=1 00:05:14.463 --rc geninfo_unexecuted_blocks=1 00:05:14.463 00:05:14.463 ' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.463 --rc genhtml_branch_coverage=1 00:05:14.463 --rc genhtml_function_coverage=1 00:05:14.463 --rc genhtml_legend=1 00:05:14.463 --rc geninfo_all_blocks=1 00:05:14.463 --rc geninfo_unexecuted_blocks=1 00:05:14.463 00:05:14.463 ' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.463 --rc genhtml_branch_coverage=1 00:05:14.463 --rc genhtml_function_coverage=1 00:05:14.463 --rc genhtml_legend=1 00:05:14.463 --rc geninfo_all_blocks=1 00:05:14.463 --rc geninfo_unexecuted_blocks=1 00:05:14.463 00:05:14.463 ' 00:05:14.463 19:03:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:14.463 19:03:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.463 19:03:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:14.463 19:03:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.463 19:03:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.463 ************************************ 00:05:14.463 START TEST event_perf 00:05:14.463 ************************************ 00:05:14.463 19:03:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.722 Running I/O for 1 seconds...[2024-11-27 19:03:24.123929] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:14.722 [2024-11-27 19:03:24.124111] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58219 ] 00:05:14.722 [2024-11-27 19:03:24.279351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.980 [2024-11-27 19:03:24.381724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.980 [2024-11-27 19:03:24.382144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.980 [2024-11-27 19:03:24.382180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.980 Running I/O for 1 seconds...[2024-11-27 19:03:24.381899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.915 00:05:15.915 lcore 0: 161902 00:05:15.915 lcore 1: 161904 00:05:15.915 lcore 2: 161905 00:05:15.915 lcore 3: 161907 00:05:15.915 done. 
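The event_perf run above exercises the event framework on four reactors; each "lcore N:" line is the number of events that reactor processed in the timed window, and the roughly equal per-core totals suggest the event load stayed balanced across the reactors. The invocation, as driven by event.sh:

# -m 0xF pins reactors to lcores 0-3, -t 1 runs the measurement for 1 second
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1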
00:05:15.915 00:05:15.915 real 0m1.429s 00:05:15.915 user 0m4.226s 00:05:15.915 sys 0m0.082s 00:05:15.915 19:03:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.915 ************************************ 00:05:15.915 END TEST event_perf 00:05:15.915 ************************************ 00:05:15.915 19:03:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 19:03:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.173 19:03:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.173 19:03:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.173 19:03:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 ************************************ 00:05:16.173 START TEST event_reactor 00:05:16.173 ************************************ 00:05:16.173 19:03:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.173 [2024-11-27 19:03:25.599495] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:16.173 [2024-11-27 19:03:25.599719] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58253 ] 00:05:16.173 [2024-11-27 19:03:25.754792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.431 [2024-11-27 19:03:25.840625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.366 test_start 00:05:17.366 oneshot 00:05:17.366 tick 100 00:05:17.366 tick 100 00:05:17.366 tick 250 00:05:17.366 tick 100 00:05:17.366 tick 100 00:05:17.366 tick 100 00:05:17.366 tick 250 00:05:17.366 tick 500 00:05:17.366 tick 100 00:05:17.366 tick 100 00:05:17.366 tick 250 00:05:17.366 tick 100 00:05:17.366 tick 100 00:05:17.366 test_end 00:05:17.366 00:05:17.366 real 0m1.395s 00:05:17.366 user 0m1.223s 00:05:17.366 sys 0m0.066s 00:05:17.366 19:03:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.366 19:03:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:17.366 ************************************ 00:05:17.366 END TEST event_reactor 00:05:17.366 ************************************ 00:05:17.366 19:03:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.366 19:03:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:17.366 19:03:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.366 19:03:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.625 ************************************ 00:05:17.625 START TEST event_reactor_perf 00:05:17.625 ************************************ 00:05:17.625 19:03:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.625 [2024-11-27 19:03:27.034021] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
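The event_reactor run that finished above (test_start through test_end) drives a single reactor with timed pollers for one second; each "tick N" line appears to mark a poller registered with period N firing during the window, with "oneshot" firing once. The invocation mirrors event.sh:

# single core (the app logs -c 0x1 above), one second of runtime
/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1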
00:05:17.625 [2024-11-27 19:03:27.034241] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:05:17.625 [2024-11-27 19:03:27.187509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.883 [2024-11-27 19:03:27.272517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.816 test_start 00:05:18.816 test_end 00:05:18.816 Performance: 419719 events per second 00:05:18.816 00:05:18.816 real 0m1.393s 00:05:18.816 user 0m1.224s 00:05:18.816 sys 0m0.062s 00:05:18.816 19:03:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.816 ************************************ 00:05:18.816 END TEST event_reactor_perf 00:05:18.816 ************************************ 00:05:18.816 19:03:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.816 19:03:28 event -- event/event.sh@49 -- # uname -s 00:05:18.816 19:03:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.816 19:03:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.816 19:03:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.816 19:03:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.816 19:03:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.816 ************************************ 00:05:18.816 START TEST event_scheduler 00:05:18.816 ************************************ 00:05:18.816 19:03:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.073 * Looking for test storage... 
00:05:19.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:19.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.073 19:03:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.073 --rc genhtml_branch_coverage=1 00:05:19.073 --rc genhtml_function_coverage=1 00:05:19.073 --rc genhtml_legend=1 00:05:19.073 --rc geninfo_all_blocks=1 00:05:19.073 --rc geninfo_unexecuted_blocks=1 00:05:19.073 00:05:19.073 ' 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.073 --rc genhtml_branch_coverage=1 00:05:19.073 --rc genhtml_function_coverage=1 00:05:19.073 --rc genhtml_legend=1 00:05:19.073 --rc geninfo_all_blocks=1 00:05:19.073 --rc geninfo_unexecuted_blocks=1 00:05:19.073 00:05:19.073 ' 00:05:19.073 19:03:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.074 --rc genhtml_branch_coverage=1 00:05:19.074 --rc genhtml_function_coverage=1 00:05:19.074 --rc genhtml_legend=1 00:05:19.074 --rc geninfo_all_blocks=1 00:05:19.074 --rc geninfo_unexecuted_blocks=1 00:05:19.074 00:05:19.074 ' 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.074 --rc genhtml_branch_coverage=1 00:05:19.074 --rc genhtml_function_coverage=1 00:05:19.074 --rc genhtml_legend=1 00:05:19.074 --rc geninfo_all_blocks=1 00:05:19.074 --rc geninfo_unexecuted_blocks=1 00:05:19.074 00:05:19.074 ' 00:05:19.074 19:03:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.074 19:03:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58360 00:05:19.074 19:03:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.074 19:03:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58360 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58360 ']' 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.074 19:03:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.074 19:03:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.074 [2024-11-27 19:03:28.649875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:19.074 [2024-11-27 19:03:28.649998] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58360 ] 00:05:19.331 [2024-11-27 19:03:28.809334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.331 [2024-11-27 19:03:28.913738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.331 [2024-11-27 19:03:28.913971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.331 [2024-11-27 19:03:28.914228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.331 [2024-11-27 19:03:28.914237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:19.950 19:03:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.950 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.950 POWER: Cannot set governor of lcore 0 to performance 00:05:19.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.950 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.950 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.950 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:19.950 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:19.950 POWER: Unable to set Power Management Environment for lcore 0 00:05:19.950 [2024-11-27 19:03:29.487676] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:19.950 [2024-11-27 19:03:29.487699] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:19.950 [2024-11-27 19:03:29.487709] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.950 [2024-11-27 19:03:29.487725] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.950 [2024-11-27 19:03:29.487733] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.950 [2024-11-27 19:03:29.487742] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.950 19:03:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.950 19:03:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.210 [2024-11-27 19:03:29.728071] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
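The POWER and GUEST_CHANNEL errors above are expected inside a VM: no cpufreq governors or virtio power channel are exposed, so the dynamic scheduler comes up without the dpdk governor and keeps its defaults (load limit 20, core limit 80, core busy 95). The bring-up that scheduler.sh drives through its rpc_cmd wrapper reduces to two RPCs against the app started with --wait-for-rpc; a sketch calling scripts/rpc.py directly, with the socket path as logged:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# select the scheduler while the app is still parked in --wait-for-rpc mode,
# then release it to finish initialization
$RPC -s /var/tmp/spdk.sock framework_set_scheduler dynamic
$RPC -s /var/tmp/spdk.sock framework_start_init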
00:05:20.210 19:03:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.210 19:03:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.210 19:03:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.210 19:03:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.210 19:03:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.210 ************************************ 00:05:20.210 START TEST scheduler_create_thread 00:05:20.210 ************************************ 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.210 2 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.210 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 3 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 4 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 5 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 6 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 7 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 8 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 9 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 10 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.211 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.470 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.470 19:03:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.470 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.470 19:03:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.842 19:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.842 19:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.842 19:03:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.842 19:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.842 19:03:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.776 19:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.776 00:05:22.776 real 0m2.616s 00:05:22.776 user 0m0.017s 00:05:22.776 sys 0m0.006s 00:05:22.776 19:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.777 19:03:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.777 ************************************ 00:05:22.777 END TEST scheduler_create_thread 00:05:22.777 ************************************ 00:05:22.777 19:03:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.777 19:03:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58360 00:05:22.777 19:03:32 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58360 ']' 00:05:22.777 19:03:32 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58360 00:05:22.777 19:03:32 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:22.777 19:03:32 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.777 19:03:32 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58360 00:05:23.034 19:03:32 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:23.034 19:03:32 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:23.034 killing process with pid 58360 00:05:23.035 19:03:32 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58360' 00:05:23.035 19:03:32 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58360 00:05:23.035 19:03:32 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58360 00:05:23.293 [2024-11-27 19:03:32.839074] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
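The scheduler_create_thread test above walks the scheduler_plugin RPCs end to end: pinned active and idle threads on each core (masks 0x1 through 0x8), a one_third_active and a half_active thread, the half_active one raised to 50% with scheduler_thread_set_active, and a "deleted" thread removed right after creation. The same lifecycle in sketch form, assuming the plugin module is importable (the test puts its directory on PYTHONPATH):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock"
tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # 100% active, pinned to lcore 0
$RPC scheduler_thread_set_active "$tid" 50                          # let the dynamic scheduler rebalance it
$RPC scheduler_thread_delete "$tid"                                 # tear the thread down again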
00:05:23.862 00:05:23.862 real 0m4.988s 00:05:23.862 user 0m8.726s 00:05:23.862 sys 0m0.341s 00:05:23.862 19:03:33 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.862 ************************************ 00:05:23.862 END TEST event_scheduler 00:05:23.862 ************************************ 00:05:23.862 19:03:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.862 19:03:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.862 19:03:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.862 19:03:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.862 19:03:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.862 19:03:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.862 ************************************ 00:05:23.862 START TEST app_repeat 00:05:23.862 ************************************ 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:23.862 Process app_repeat pid: 58466 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58466 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58466' 00:05:23.862 spdk_app_start Round 0 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:05:23.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.862 19:03:33 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.862 19:03:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.121 [2024-11-27 19:03:33.527905] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:24.121 [2024-11-27 19:03:33.528025] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58466 ] 00:05:24.121 [2024-11-27 19:03:33.686924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.379 [2024-11-27 19:03:33.780481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.379 [2024-11-27 19:03:33.780575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.946 19:03:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.946 19:03:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.946 19:03:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.207 Malloc0 00:05:25.207 19:03:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.481 Malloc1 00:05:25.481 19:03:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.481 19:03:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.481 /dev/nbd0 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.740 19:03:35 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.740 1+0 records in 00:05:25.740 1+0 records out 00:05:25.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244054 s, 16.8 MB/s 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.740 /dev/nbd1 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.740 1+0 records in 00:05:25.740 1+0 records out 00:05:25.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310444 s, 13.2 MB/s 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.740 19:03:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.740 19:03:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
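Both waitfornbd calls above follow the same two-phase check: poll /proc/partitions until the kernel exposes the device node, then prove the device actually serves reads with a single O_DIRECT dd. A sketch of that helper as it appears in the trace (the sleep interval and the tmp path are assumptions; the trace writes to test/event/nbdtest):

    # Sketch: wait until /dev/$1 exists and returns data on a direct read.
    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest
        # Phase 1: up to 20 polls for the node to appear in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: existence is not enough; require a successful O_DIRECT
        # read of one 4 KiB block before declaring the device usable.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }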
00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.999 { 00:05:25.999 "nbd_device": "/dev/nbd0", 00:05:25.999 "bdev_name": "Malloc0" 00:05:25.999 }, 00:05:25.999 { 00:05:25.999 "nbd_device": "/dev/nbd1", 00:05:25.999 "bdev_name": "Malloc1" 00:05:25.999 } 00:05:25.999 ]' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.999 { 00:05:25.999 "nbd_device": "/dev/nbd0", 00:05:25.999 "bdev_name": "Malloc0" 00:05:25.999 }, 00:05:25.999 { 00:05:25.999 "nbd_device": "/dev/nbd1", 00:05:25.999 "bdev_name": "Malloc1" 00:05:25.999 } 00:05:25.999 ]' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.999 /dev/nbd1' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.999 /dev/nbd1' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.999 256+0 records in 00:05:25.999 256+0 records out 00:05:25.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634104 s, 165 MB/s 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.999 256+0 records in 00:05:25.999 256+0 records out 00:05:25.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167613 s, 62.6 MB/s 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.999 256+0 records in 00:05:25.999 256+0 records out 00:05:25.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156813 s, 66.9 MB/s 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.999 19:03:35 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.999 19:03:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.258 19:03:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.517 19:03:36 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.517 19:03:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.775 19:03:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.775 19:03:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.034 19:03:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.600 [2024-11-27 19:03:37.223675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.859 [2024-11-27 19:03:37.317986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.859 [2024-11-27 19:03:37.318070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.859 [2024-11-27 19:03:37.427179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.859 [2024-11-27 19:03:37.427247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.396 19:03:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.396 spdk_app_start Round 1 00:05:30.396 19:03:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.396 19:03:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
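Between rounds the harness asserts clean teardown: nbd_get_disks must return the empty list '[]' so nbd_get_count yields 0 before the app is killed with spdk_kill_instance SIGTERM. A sketch of the counting helper, matching the rpc.py / jq / grep -c sequence in the trace (rpc.py path assumed relative to the repo root):

    # Sketch: count how many /dev/nbd* devices the app still exports.
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 when it counts zero matches, so '|| true' keeps the
        # empty-table case (the '[]' JSON above) from aborting under set -e.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }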
00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.396 19:03:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.396 19:03:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.655 Malloc0 00:05:30.656 19:03:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.915 Malloc1 00:05:30.915 19:03:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.915 /dev/nbd0 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.915 1+0 records in 00:05:30.915 1+0 records out 
00:05:30.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028748 s, 14.2 MB/s 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.915 19:03:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.915 19:03:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.174 /dev/nbd1 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.174 1+0 records in 00:05:31.174 1+0 records out 00:05:31.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181549 s, 22.6 MB/s 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:31.174 19:03:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.174 19:03:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.433 19:03:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.433 { 00:05:31.433 "nbd_device": "/dev/nbd0", 00:05:31.433 "bdev_name": "Malloc0" 00:05:31.433 }, 00:05:31.433 { 00:05:31.433 "nbd_device": "/dev/nbd1", 00:05:31.433 "bdev_name": "Malloc1" 00:05:31.433 } 
00:05:31.433 ]' 00:05:31.433 19:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.433 { 00:05:31.433 "nbd_device": "/dev/nbd0", 00:05:31.433 "bdev_name": "Malloc0" 00:05:31.433 }, 00:05:31.433 { 00:05:31.433 "nbd_device": "/dev/nbd1", 00:05:31.433 "bdev_name": "Malloc1" 00:05:31.433 } 00:05:31.433 ]' 00:05:31.433 19:03:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.433 /dev/nbd1' 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.433 /dev/nbd1' 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.433 19:03:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.434 256+0 records in 00:05:31.434 256+0 records out 00:05:31.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078136 s, 134 MB/s 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.434 256+0 records in 00:05:31.434 256+0 records out 00:05:31.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182051 s, 57.6 MB/s 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.434 19:03:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.692 256+0 records in 00:05:31.692 256+0 records out 00:05:31.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173002 s, 60.6 MB/s 00:05:31.692 19:03:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.692 19:03:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.693 19:03:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.951 19:03:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.210 19:03:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.210 19:03:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.468 19:03:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.403 [2024-11-27 19:03:42.849935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.403 [2024-11-27 19:03:42.961736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.403 [2024-11-27 19:03:42.961823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.661 [2024-11-27 19:03:43.078314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.661 [2024-11-27 19:03:43.078388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.564 19:03:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.564 spdk_app_start Round 2 00:05:35.564 19:03:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.564 19:03:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
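Each round's data check is the nbd_dd_data_verify pair seen above: a write pass pushes one shared 1 MiB random pattern to both exported devices with O_DIRECT, and a verify pass cmp-compares each device against that pattern, so any mismatch in the Malloc-backed nbd path fails the round. A sketch under the traced signature (the tmp path is shortened for illustration; the trace uses test/event/nbdrandtest):

    # Sketch: nbd_dd_data_verify "/dev/nbd0 /dev/nbd1" write|verify
    nbd_dd_data_verify() {
        local nbd_list=($1)    # space-separated device list
        local operation=$2
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # One shared 1 MiB random pattern, copied to every device.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # cmp exits non-zero on the first differing byte, failing the test.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }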
00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.564 19:03:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.823 19:03:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.823 19:03:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.823 19:03:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.081 Malloc0 00:05:36.082 19:03:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.340 Malloc1 00:05:36.340 19:03:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.340 19:03:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.341 /dev/nbd0 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.341 19:03:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.341 1+0 records in 00:05:36.341 1+0 records out 
00:05:36.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273296 s, 15.0 MB/s 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.341 19:03:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.599 19:03:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.599 19:03:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.599 19:03:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.599 19:03:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.599 19:03:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.599 /dev/nbd1 00:05:36.599 19:03:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.599 19:03:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.599 19:03:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.599 19:03:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.599 19:03:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.599 19:03:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.599 19:03:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.600 1+0 records in 00:05:36.600 1+0 records out 00:05:36.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022386 s, 18.3 MB/s 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.600 19:03:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.600 19:03:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.600 19:03:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.600 19:03:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.600 19:03:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.600 19:03:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.859 { 00:05:36.859 "nbd_device": "/dev/nbd0", 00:05:36.859 "bdev_name": "Malloc0" 00:05:36.859 }, 00:05:36.859 { 00:05:36.859 "nbd_device": "/dev/nbd1", 00:05:36.859 "bdev_name": "Malloc1" 00:05:36.859 } 
00:05:36.859 ]' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.859 { 00:05:36.859 "nbd_device": "/dev/nbd0", 00:05:36.859 "bdev_name": "Malloc0" 00:05:36.859 }, 00:05:36.859 { 00:05:36.859 "nbd_device": "/dev/nbd1", 00:05:36.859 "bdev_name": "Malloc1" 00:05:36.859 } 00:05:36.859 ]' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.859 /dev/nbd1' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.859 /dev/nbd1' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.859 256+0 records in 00:05:36.859 256+0 records out 00:05:36.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766741 s, 137 MB/s 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.859 256+0 records in 00:05:36.859 256+0 records out 00:05:36.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182379 s, 57.5 MB/s 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.859 19:03:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.151 256+0 records in 00:05:37.151 256+0 records out 00:05:37.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177665 s, 59.0 MB/s 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.151 19:03:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.151 19:03:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.410 19:03:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.669 19:03:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.669 19:03:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.927 19:03:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.492 [2024-11-27 19:03:48.122408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.751 [2024-11-27 19:03:48.207203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.751 [2024-11-27 19:03:48.207240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.751 [2024-11-27 19:03:48.320543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.751 [2024-11-27 19:03:48.320608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.288 19:03:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58466 /var/tmp/spdk-nbd.sock 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58466 ']' 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:41.288 19:03:50 event.app_repeat -- event/event.sh@39 -- # killprocess 58466 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58466 ']' 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58466 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58466 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.288 killing process with pid 58466 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58466' 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58466 00:05:41.288 19:03:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58466 00:05:41.856 spdk_app_start is called in Round 0. 00:05:41.856 Shutdown signal received, stop current app iteration 00:05:41.856 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:41.856 spdk_app_start is called in Round 1. 00:05:41.856 Shutdown signal received, stop current app iteration 00:05:41.856 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:41.856 spdk_app_start is called in Round 2. 00:05:41.856 Shutdown signal received, stop current app iteration 00:05:41.856 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:41.856 spdk_app_start is called in Round 3. 00:05:41.856 Shutdown signal received, stop current app iteration 00:05:41.856 19:03:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.856 19:03:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:41.856 00:05:41.856 real 0m17.815s 00:05:41.856 user 0m38.841s 00:05:41.856 sys 0m2.146s 00:05:41.856 19:03:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.856 ************************************ 00:05:41.856 END TEST app_repeat 00:05:41.856 ************************************ 00:05:41.856 19:03:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.856 19:03:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.856 19:03:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.856 19:03:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.856 19:03:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.856 19:03:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.856 ************************************ 00:05:41.856 START TEST cpu_locks 00:05:41.856 ************************************ 00:05:41.856 19:03:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:41.856 * Looking for test storage... 
00:05:41.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:41.856 19:03:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.856 19:03:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.856 19:03:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.856 19:03:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.115 19:03:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.115 --rc genhtml_branch_coverage=1 00:05:42.115 --rc genhtml_function_coverage=1 00:05:42.115 --rc genhtml_legend=1 00:05:42.115 --rc geninfo_all_blocks=1 00:05:42.115 --rc geninfo_unexecuted_blocks=1 00:05:42.115 00:05:42.115 ' 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.115 --rc genhtml_branch_coverage=1 00:05:42.115 --rc genhtml_function_coverage=1 
00:05:42.115 --rc genhtml_legend=1 00:05:42.115 --rc geninfo_all_blocks=1 00:05:42.115 --rc geninfo_unexecuted_blocks=1 00:05:42.115 00:05:42.115 ' 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.115 --rc genhtml_branch_coverage=1 00:05:42.115 --rc genhtml_function_coverage=1 00:05:42.115 --rc genhtml_legend=1 00:05:42.115 --rc geninfo_all_blocks=1 00:05:42.115 --rc geninfo_unexecuted_blocks=1 00:05:42.115 00:05:42.115 ' 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.115 --rc genhtml_branch_coverage=1 00:05:42.115 --rc genhtml_function_coverage=1 00:05:42.115 --rc genhtml_legend=1 00:05:42.115 --rc geninfo_all_blocks=1 00:05:42.115 --rc geninfo_unexecuted_blocks=1 00:05:42.115 00:05:42.115 ' 00:05:42.115 19:03:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.115 19:03:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.115 19:03:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.115 19:03:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.115 19:03:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.115 ************************************ 00:05:42.115 START TEST default_locks 00:05:42.115 ************************************ 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58896 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58896 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58896 ']' 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.115 19:03:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.115 [2024-11-27 19:03:51.593810] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
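The lcov version probe traced above reduces to a segment-wise numeric comparison: lt 1.15 2 splits both versions on ".", "-" and ":" and compares field by field. A condensed sketch, reconstructed from the xtrace (scripts/common.sh@333-368) rather than copied from the source file, so details are assumed:

    # Condensed sketch of cmp_versions, inferred from the xtrace above.
    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            if (( ${ver1[v]:-0} != ${ver2[v]:-0} )); then
                case $2 in
                    '<') (( ${ver1[v]:-0} < ${ver2[v]:-0} )); return ;;
                    '>') (( ${ver1[v]:-0} > ${ver2[v]:-0} )); return ;;
                esac
            fi
        done
        [[ $2 == *'='* ]]                  # all fields equal: only <=, >=, == pass
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # 1.15 < 2, so the coverage flags get exported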
00:05:42.115 [2024-11-27 19:03:51.593922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:05:42.373 [2024-11-27 19:03:51.751215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.373 [2024-11-27 19:03:51.841495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.938 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.938 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:42.938 19:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58896 00:05:42.938 19:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58896 00:05:42.938 19:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58896 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58896 ']' 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58896 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58896 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.197 killing process with pid 58896 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58896' 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58896 00:05:43.197 19:03:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58896 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58896 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58896 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58896 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58896 ']' 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
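The locks_exist check that follows waitforlisten is visible in full in the trace: it asks lslocks which files the pid holds locks on and greps for the spdk_cpu_lock prefix. A minimal sketch:

    # Minimal sketch of locks_exist as traced (event/cpu_locks.sh@22):
    # succeeds when the target pid holds a lock on an spdk_cpu_lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }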
00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.572 ERROR: process (pid: 58896) is no longer running 00:05:44.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58896) - No such process 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.572 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.573 00:05:44.573 real 0m2.367s 00:05:44.573 user 0m2.333s 00:05:44.573 sys 0m0.471s 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.573 19:03:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.573 ************************************ 00:05:44.573 END TEST default_locks 00:05:44.573 ************************************ 00:05:44.573 19:03:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.573 19:03:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.573 19:03:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.573 19:03:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.573 ************************************ 00:05:44.573 START TEST default_locks_via_rpc 00:05:44.573 ************************************ 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58955 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58955 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.573 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.573 19:03:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.573 [2024-11-27 19:03:54.028553] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:44.573 [2024-11-27 19:03:54.028673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:05:44.573 [2024-11-27 19:03:54.183505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.831 [2024-11-27 19:03:54.274104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58955 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58955 00:05:45.395 19:03:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58955 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58955 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.653 19:03:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.653 killing process with pid 58955 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58955 00:05:45.653 19:03:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58955 00:05:47.081 00:05:47.081 real 0m2.445s 00:05:47.081 user 0m2.417s 00:05:47.081 sys 0m0.502s 00:05:47.081 19:03:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.081 ************************************ 00:05:47.081 END TEST default_locks_via_rpc 00:05:47.081 ************************************ 00:05:47.081 19:03:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 19:03:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.081 19:03:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.081 19:03:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.081 19:03:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 ************************************ 00:05:47.081 START TEST non_locking_app_on_locked_coremask 00:05:47.081 ************************************ 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59007 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59007 /var/tmp/spdk.sock 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59007 ']' 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.081 19:03:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.081 [2024-11-27 19:03:56.537827] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
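killprocess, traced twice above, verifies that the pid is alive and is not a sudo wrapper before killing and reaping it. A rough reconstruction from the xtrace; the non-Linux branch implied by the uname test, and the sudo branch, are assumptions:

    # Rough reconstruction of killprocess; Linux path only.
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return                  # nothing to do if already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1   # assumed guard; trace only shows the test
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child
    }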
00:05:47.081 [2024-11-27 19:03:56.537942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:05:47.081 [2024-11-27 19:03:56.691989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.340 [2024-11-27 19:03:56.785091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59023 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59023 /var/tmp/spdk2.sock 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59023 ']' 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.906 19:03:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.906 [2024-11-27 19:03:57.398489] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:47.906 [2024-11-27 19:03:57.398599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59023 ] 00:05:48.165 [2024-11-27 19:03:57.562158] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
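The shape of the test just started: the first spdk_tgt on -m 0x1 claims the core-0 lock file, and a second instance still comes up on the same core because --disable-cpumask-locks skips the claim (hence the "CPU core locks deactivated" notice above). Schematically, with paths as they appear in the log and pids illustrative:

    # Schematic of non_locking_app_on_locked_coremask.
    spdk_tgt -m 0x1 &                                    # claims /var/tmp/spdk_cpu_lock_000
    waitforlisten $! /var/tmp/spdk.sock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    waitforlisten $! /var/tmp/spdk2.sock                 # succeeds: lock claim skipped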
00:05:48.165 [2024-11-27 19:03:57.562213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.165 [2024-11-27 19:03:57.770043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.540 19:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.540 19:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.540 19:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59007 00:05:49.540 19:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59007 00:05:49.540 19:03:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59007 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59007 ']' 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59007 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59007 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.540 killing process with pid 59007 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59007' 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59007 00:05:49.540 19:03:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59007 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59023 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59023 ']' 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59023 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59023 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.069 killing process with pid 59023 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59023' 00:05:52.069 19:04:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59023 00:05:52.069 19:04:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59023 00:05:53.445 00:05:53.445 real 0m6.365s 00:05:53.445 user 0m6.446s 00:05:53.445 sys 0m0.881s 00:05:53.445 19:04:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.445 ************************************ 00:05:53.445 END TEST non_locking_app_on_locked_coremask 00:05:53.445 ************************************ 00:05:53.445 19:04:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.445 19:04:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.445 19:04:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.445 19:04:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.445 19:04:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.445 ************************************ 00:05:53.445 START TEST locking_app_on_unlocked_coremask 00:05:53.445 ************************************ 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59125 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59125 /var/tmp/spdk.sock 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59125 ']' 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.445 19:04:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.445 [2024-11-27 19:04:02.939095] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:53.445 [2024-11-27 19:04:02.939236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59125 ] 00:05:53.705 [2024-11-27 19:04:03.096134] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.705 [2024-11-27 19:04:03.096183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.705 [2024-11-27 19:04:03.186861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.271 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.271 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59130 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59130 /var/tmp/spdk2.sock 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.272 19:04:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.272 [2024-11-27 19:04:03.836201] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:54.272 [2024-11-27 19:04:03.836482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:05:54.530 [2024-11-27 19:04:03.997998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.788 [2024-11-27 19:04:04.181105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.722 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.722 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.722 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59130 00:05:55.723 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.723 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59125 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59125 ']' 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59125 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59125 00:05:55.982 killing process with pid 59125 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59125' 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59125 00:05:55.982 19:04:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59125 00:05:58.512 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59130 00:05:58.512 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:05:58.512 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:05:58.512 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:05:58.513 killing process with pid 59130 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.513 19:04:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:05:58.513 19:04:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:05:59.889 ************************************ 00:05:59.889 END TEST locking_app_on_unlocked_coremask 00:05:59.889 ************************************ 00:05:59.889 00:05:59.889 real 0m6.523s 00:05:59.889 user 0m6.652s 00:05:59.889 sys 0m0.951s 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.889 19:04:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.889 19:04:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.889 19:04:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.889 19:04:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.889 ************************************ 00:05:59.889 START TEST locking_app_on_locked_coremask 00:05:59.889 ************************************ 00:05:59.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59232 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59232 /var/tmp/spdk.sock 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59232 ']' 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.889 19:04:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.889 [2024-11-27 19:04:09.511715] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
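Every target start above goes through waitforlisten, which polls until the RPC socket appears or retries run out. Only the retry cap (max_retries=100) and the echo are visible in the trace; the polling body below is an assumption:

    # Hedged sketch of waitforlisten; loop body assumed, not traced.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            [[ -S $rpc_addr ]] && return 0            # socket is up
            sleep 0.1
        done
        return 1
    }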
00:05:59.889 [2024-11-27 19:04:09.511834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:06:00.146 [2024-11-27 19:04:09.663433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.146 [2024-11-27 19:04:09.767457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59248 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59248 /var/tmp/spdk2.sock 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59248 /var/tmp/spdk2.sock 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:00.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59248 /var/tmp/spdk2.sock 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59248 ']' 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.712 19:04:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.970 [2024-11-27 19:04:10.372566] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:00.970 [2024-11-27 19:04:10.372659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:06:00.970 [2024-11-27 19:04:10.530204] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59232 has claimed it. 00:06:00.970 [2024-11-27 19:04:10.530254] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.537 ERROR: process (pid: 59248) is no longer running 00:06:01.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59248) - No such process 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59232 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59232 00:06:01.537 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59232 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59232 ']' 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59232 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59232 00:06:01.796 killing process with pid 59232 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59232' 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59232 00:06:01.796 19:04:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59232 00:06:03.169 00:06:03.169 real 0m3.022s 00:06:03.169 user 0m3.161s 00:06:03.169 sys 0m0.554s 00:06:03.169 19:04:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.169 ************************************ 00:06:03.169 END 
TEST locking_app_on_locked_coremask 00:06:03.169 ************************************ 00:06:03.169 19:04:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.169 19:04:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.169 19:04:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.169 19:04:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.169 19:04:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.169 ************************************ 00:06:03.169 START TEST locking_overlapped_coremask 00:06:03.169 ************************************ 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:03.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59301 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59301 /var/tmp/spdk.sock 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59301 ']' 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.169 19:04:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.169 [2024-11-27 19:04:12.587935] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
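In the locking_app_on_locked_coremask run above, the second target is required to fail, so its waitforlisten is wrapped in NOT; the traced es bookkeeping (es=0, then es=1, then (( !es == 0 ))) shows the helper inverting the exit status. A sketch consistent with that trace (autotest_common.sh@652-679), with argument validation elided:

    # Sketch of the NOT helper from the traced es bookkeeping.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es & 127 ))   # assumed: normalize signal deaths
        (( es != 0 ))                          # succeed only if the command failed
    }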
00:06:03.169 [2024-11-27 19:04:12.588036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 00:06:03.169 [2024-11-27 19:04:12.740342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.427 [2024-11-27 19:04:12.850423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.427 [2024-11-27 19:04:12.850739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.427 [2024-11-27 19:04:12.850753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59319 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59319 /var/tmp/spdk2.sock 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:03.994 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59319 /var/tmp/spdk2.sock 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:03.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59319 /var/tmp/spdk2.sock 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59319 ']' 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.995 19:04:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.995 [2024-11-27 19:04:13.485337] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:03.995 [2024-11-27 19:04:13.485451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:06:04.253 [2024-11-27 19:04:13.660368] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59301 has claimed it. 00:06:04.253 [2024-11-27 19:04:13.660431] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.511 ERROR: process (pid: 59319) is no longer running 00:06:04.511 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59319) - No such process 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.511 19:04:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59301 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59301 ']' 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59301 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.512 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301 00:06:04.770 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.770 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.770 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59301' 00:06:04.770 killing process with pid 59301 00:06:04.770 19:04:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59301 00:06:04.770 19:04:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59301 00:06:06.146 00:06:06.146 real 0m2.883s 00:06:06.146 user 0m7.727s 00:06:06.146 sys 0m0.495s 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.146 ************************************ 00:06:06.146 END TEST locking_overlapped_coremask 00:06:06.146 ************************************ 00:06:06.146 19:04:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.146 19:04:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.146 19:04:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.146 19:04:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.146 ************************************ 00:06:06.146 START TEST locking_overlapped_coremask_via_rpc 00:06:06.146 ************************************ 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:06.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59372 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59372 /var/tmp/spdk.sock 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.146 19:04:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.146 [2024-11-27 19:04:15.515845] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:06.146 [2024-11-27 19:04:15.516101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:06:06.146 [2024-11-27 19:04:15.667400] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
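After the overlap failure in the previous test, check_remaining_locks (traced above) asserts that the surviving -m 0x7 target holds exactly the lock files for cores 0-2 and nothing else:

    # check_remaining_locks as traced (event/cpu_locks.sh@36-38): glob the
    # actual lock files and compare against the expected 000..002 set.
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }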
00:06:06.146 [2024-11-27 19:04:15.667433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.146 [2024-11-27 19:04:15.759669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.146 [2024-11-27 19:04:15.759943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.146 [2024-11-27 19:04:15.760046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59389 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.713 19:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.971 [2024-11-27 19:04:16.376259] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:06.971 [2024-11-27 19:04:16.376564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:06:06.971 [2024-11-27 19:04:16.540426] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.971 [2024-11-27 19:04:16.540479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:07.229 [2024-11-27 19:04:16.715981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.229 [2024-11-27 19:04:16.715912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:07.230 [2024-11-27 19:04:16.716015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:08.164 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.165 [2024-11-27 19:04:17.694276] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59372 has claimed it.
00:06:08.165 request:
00:06:08.165 {
00:06:08.165 "method": "framework_enable_cpumask_locks",
00:06:08.165 "req_id": 1
00:06:08.165 }
00:06:08.165 Got JSON-RPC error response
00:06:08.165 response:
00:06:08.165 {
00:06:08.165 "code": -32603,
00:06:08.165 "message": "Failed to claim CPU core: 2"
00:06:08.165 }
00:06:08.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59372 /var/tmp/spdk.sock 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.165 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.423 19:04:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.682 ************************************ 00:06:08.682 END TEST locking_overlapped_coremask_via_rpc 00:06:08.682 ************************************ 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.682 00:06:08.682 real 0m2.673s 00:06:08.682 user 0m1.015s 00:06:08.682 sys 0m0.138s 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.682 19:04:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.682 19:04:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.682 19:04:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59372 ]] 00:06:08.682 19:04:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59372 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59372 ']' 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59372 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59372 00:06:08.682 killing process with pid 59372 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59372' 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59372 00:06:08.682 19:04:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59372 00:06:10.059 19:04:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:06:10.059 19:04:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59389 ']' 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59389 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.059 
19:04:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59389 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.059 killing process with pid 59389 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59389' 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59389 00:06:10.059 19:04:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59389 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59372 ]] 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59372 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59372 ']' 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59372 00:06:11.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59372) - No such process 00:06:11.445 Process with pid 59372 is not found 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59372 is not found' 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59389 ]] 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59389 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59389 ']' 00:06:11.445 Process with pid 59389 is not found 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59389 00:06:11.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59389) - No such process 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59389 is not found' 00:06:11.445 19:04:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.445 00:06:11.445 real 0m29.671s 00:06:11.445 user 0m50.208s 00:06:11.445 sys 0m4.808s 00:06:11.445 ************************************ 00:06:11.445 END TEST cpu_locks 00:06:11.445 ************************************ 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.445 19:04:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.445 ************************************ 00:06:11.445 END TEST event 00:06:11.445 ************************************ 00:06:11.445 00:06:11.445 real 0m57.131s 00:06:11.445 user 1m44.627s 00:06:11.445 sys 0m7.731s 00:06:11.445 19:04:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.445 19:04:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.707 19:04:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:11.707 19:04:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.707 19:04:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.707 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:06:11.707 ************************************ 00:06:11.707 START TEST thread 00:06:11.707 ************************************ 00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:11.708 * Looking for test storage... 
00:06:11.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:11.708 19:04:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:11.708 19:04:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:11.708 19:04:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:11.708 19:04:21 thread -- scripts/common.sh@336 -- # IFS=.-:
00:06:11.708 19:04:21 thread -- scripts/common.sh@336 -- # read -ra ver1
00:06:11.708 19:04:21 thread -- scripts/common.sh@337 -- # IFS=.-:
00:06:11.708 19:04:21 thread -- scripts/common.sh@337 -- # read -ra ver2
00:06:11.708 19:04:21 thread -- scripts/common.sh@338 -- # local 'op=<'
00:06:11.708 19:04:21 thread -- scripts/common.sh@340 -- # ver1_l=2
00:06:11.708 19:04:21 thread -- scripts/common.sh@341 -- # ver2_l=1
00:06:11.708 19:04:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:11.708 19:04:21 thread -- scripts/common.sh@344 -- # case "$op" in
00:06:11.708 19:04:21 thread -- scripts/common.sh@345 -- # : 1
00:06:11.708 19:04:21 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:11.708 19:04:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:11.708 19:04:21 thread -- scripts/common.sh@365 -- # decimal 1
00:06:11.708 19:04:21 thread -- scripts/common.sh@353 -- # local d=1
00:06:11.708 19:04:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:11.708 19:04:21 thread -- scripts/common.sh@355 -- # echo 1
00:06:11.708 19:04:21 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:06:11.708 19:04:21 thread -- scripts/common.sh@366 -- # decimal 2
00:06:11.708 19:04:21 thread -- scripts/common.sh@353 -- # local d=2
00:06:11.708 19:04:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:11.708 19:04:21 thread -- scripts/common.sh@355 -- # echo 2
00:06:11.708 19:04:21 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:06:11.708 19:04:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:11.708 19:04:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:11.708 19:04:21 thread -- scripts/common.sh@368 -- # return 0
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:11.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.708 --rc genhtml_branch_coverage=1
00:06:11.708 --rc genhtml_function_coverage=1
00:06:11.708 --rc genhtml_legend=1
00:06:11.708 --rc geninfo_all_blocks=1
00:06:11.708 --rc geninfo_unexecuted_blocks=1
00:06:11.708
00:06:11.708 '
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:11.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.708 --rc genhtml_branch_coverage=1
00:06:11.708 --rc genhtml_function_coverage=1
00:06:11.708 --rc genhtml_legend=1
00:06:11.708 --rc geninfo_all_blocks=1
00:06:11.708 --rc geninfo_unexecuted_blocks=1
00:06:11.708
00:06:11.708 '
00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:11.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:11.708 --rc genhtml_branch_coverage=1 00:06:11.708 --rc genhtml_function_coverage=1 00:06:11.708 --rc genhtml_legend=1 00:06:11.708 --rc geninfo_all_blocks=1 00:06:11.708 --rc geninfo_unexecuted_blocks=1 00:06:11.708 00:06:11.708 ' 00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.708 --rc genhtml_branch_coverage=1 00:06:11.708 --rc genhtml_function_coverage=1 00:06:11.708 --rc genhtml_legend=1 00:06:11.708 --rc geninfo_all_blocks=1 00:06:11.708 --rc geninfo_unexecuted_blocks=1 00:06:11.708 00:06:11.708 ' 00:06:11.708 19:04:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.708 19:04:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.708 19:04:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.708 ************************************ 00:06:11.708 START TEST thread_poller_perf 00:06:11.708 ************************************ 00:06:11.708 19:04:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.708 [2024-11-27 19:04:21.302705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:11.708 [2024-11-27 19:04:21.303297] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59545 ] 00:06:11.968 [2024-11-27 19:04:21.459216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.968 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:11.968 [2024-11-27 19:04:21.553888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.394 [2024-11-27T19:04:23.029Z] ======================================
00:06:13.394 [2024-11-27T19:04:23.029Z] busy:2607206240 (cyc)
00:06:13.394 [2024-11-27T19:04:23.029Z] total_run_count: 403000
00:06:13.394 [2024-11-27T19:04:23.029Z] tsc_hz: 2600000000 (cyc)
00:06:13.394 [2024-11-27T19:04:23.029Z] ======================================
00:06:13.394 [2024-11-27T19:04:23.029Z] poller_cost: 6469 (cyc), 2488 (nsec)
************************************
00:06:13.394 END TEST thread_poller_perf
************************************
00:06:13.394
00:06:13.394 real 0m1.415s
00:06:13.394 user 0m1.235s
00:06:13.394 sys 0m0.072s
00:06:13.394 19:04:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.394 19:04:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:13.394 19:04:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:13.394 19:04:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:06:13.394 19:04:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.394 19:04:22 thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.394 ************************************
00:06:13.394 START TEST thread_poller_perf
00:06:13.394 ************************************
00:06:13.394 19:04:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:13.394 [2024-11-27 19:04:22.761634] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:06:13.394 [2024-11-27 19:04:22.761850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59581 ]
00:06:13.394 [2024-11-27 19:04:22.915937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.394 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:13.394 [2024-11-27 19:04:23.003504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.770 [2024-11-27T19:04:24.405Z] ======================================
00:06:14.770 [2024-11-27T19:04:24.405Z] busy:2602496378 (cyc)
00:06:14.770 [2024-11-27T19:04:24.405Z] total_run_count: 5267000
00:06:14.770 [2024-11-27T19:04:24.405Z] tsc_hz: 2600000000 (cyc)
00:06:14.770 [2024-11-27T19:04:24.405Z] ======================================
00:06:14.770 [2024-11-27T19:04:24.405Z] poller_cost: 494 (cyc), 190 (nsec)
************************************
00:06:14.770 END TEST thread_poller_perf
************************************
00:06:14.770
00:06:14.770 real 0m1.403s
00:06:14.770 user 0m1.226s
00:06:14.770 sys 0m0.071s
00:06:14.770 19:04:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.770 19:04:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:14.770 19:04:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:14.770 ************************************
00:06:14.770 END TEST thread
************************************
00:06:14.770
00:06:14.770 real 0m3.058s
00:06:14.770 user 0m2.560s
00:06:14.770 sys 0m0.271s
00:06:14.770 19:04:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.770 19:04:24 thread -- common/autotest_common.sh@10 -- # set +x
00:06:14.770 19:04:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:06:14.770 19:04:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:14.770 19:04:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.770 19:04:24 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.770 19:04:24 -- common/autotest_common.sh@10 -- # set +x
00:06:14.770 ************************************
00:06:14.770 START TEST app_cmdline
00:06:14.770 ************************************
00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:06:14.770 * Looking for test storage...
00:06:14.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:14.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.770 19:04:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.770 --rc genhtml_branch_coverage=1 00:06:14.770 --rc genhtml_function_coverage=1 00:06:14.770 --rc genhtml_legend=1 00:06:14.770 --rc geninfo_all_blocks=1 00:06:14.770 --rc geninfo_unexecuted_blocks=1 00:06:14.770 00:06:14.770 ' 00:06:14.770 19:04:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.771 --rc genhtml_branch_coverage=1 00:06:14.771 --rc genhtml_function_coverage=1 00:06:14.771 --rc genhtml_legend=1 00:06:14.771 --rc geninfo_all_blocks=1 00:06:14.771 --rc geninfo_unexecuted_blocks=1 00:06:14.771 00:06:14.771 ' 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.771 --rc genhtml_branch_coverage=1 00:06:14.771 --rc genhtml_function_coverage=1 00:06:14.771 --rc genhtml_legend=1 00:06:14.771 --rc geninfo_all_blocks=1 00:06:14.771 --rc geninfo_unexecuted_blocks=1 00:06:14.771 00:06:14.771 ' 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.771 --rc genhtml_branch_coverage=1 00:06:14.771 --rc genhtml_function_coverage=1 00:06:14.771 --rc genhtml_legend=1 00:06:14.771 --rc geninfo_all_blocks=1 00:06:14.771 --rc geninfo_unexecuted_blocks=1 00:06:14.771 00:06:14.771 ' 00:06:14.771 19:04:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.771 19:04:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59665 00:06:14.771 19:04:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59665 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59665 ']' 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.771 19:04:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.771 19:04:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 [2024-11-27 19:04:24.463917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:15.029 [2024-11-27 19:04:24.464223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:06:15.029 [2024-11-27 19:04:24.621353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.287 [2024-11-27 19:04:24.715893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.854 19:04:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.854 19:04:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:15.854 19:04:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:15.854 { 00:06:15.854 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:15.854 "fields": { 00:06:15.854 "major": 25, 00:06:15.854 "minor": 1, 00:06:15.854 "patch": 0, 00:06:15.854 "suffix": "-pre", 00:06:15.854 "commit": "35cd3e84d" 00:06:15.854 } 00:06:15.854 } 00:06:15.854 19:04:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:15.854 19:04:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:15.854 19:04:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:15.854 19:04:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:16.112 19:04:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.112 request: 00:06:16.112 { 00:06:16.112 "method": "env_dpdk_get_mem_stats", 00:06:16.112 "req_id": 1 00:06:16.112 } 00:06:16.112 Got JSON-RPC error response 00:06:16.112 response: 00:06:16.112 { 00:06:16.112 "code": -32601, 00:06:16.112 "message": "Method not found" 00:06:16.112 } 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.112 19:04:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.113 19:04:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59665 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59665 ']' 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59665 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.113 19:04:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59665 00:06:16.371 killing process with pid 59665 00:06:16.371 19:04:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.371 19:04:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.371 19:04:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59665' 00:06:16.371 19:04:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 59665 00:06:16.371 19:04:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 59665 00:06:17.747 ************************************ 00:06:17.747 END TEST app_cmdline 00:06:17.747 ************************************ 00:06:17.747 00:06:17.747 real 0m2.758s 00:06:17.747 user 0m2.992s 00:06:17.747 sys 0m0.504s 00:06:17.747 19:04:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.747 19:04:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.747 19:04:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.747 19:04:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.747 19:04:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.747 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.747 ************************************ 00:06:17.747 START TEST version 00:06:17.747 ************************************ 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.747 * Looking for test storage... 
00:06:17.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.747 19:04:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.747 19:04:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.747 19:04:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.747 19:04:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.747 19:04:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.747 19:04:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.747 19:04:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.747 19:04:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.747 19:04:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.747 19:04:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.747 19:04:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.747 19:04:27 version -- scripts/common.sh@344 -- # case "$op" in 00:06:17.747 19:04:27 version -- scripts/common.sh@345 -- # : 1 00:06:17.747 19:04:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.747 19:04:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.747 19:04:27 version -- scripts/common.sh@365 -- # decimal 1 00:06:17.747 19:04:27 version -- scripts/common.sh@353 -- # local d=1 00:06:17.747 19:04:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.747 19:04:27 version -- scripts/common.sh@355 -- # echo 1 00:06:17.747 19:04:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.747 19:04:27 version -- scripts/common.sh@366 -- # decimal 2 00:06:17.747 19:04:27 version -- scripts/common.sh@353 -- # local d=2 00:06:17.747 19:04:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.747 19:04:27 version -- scripts/common.sh@355 -- # echo 2 00:06:17.747 19:04:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.747 19:04:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.747 19:04:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.747 19:04:27 version -- scripts/common.sh@368 -- # return 0 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.747 --rc genhtml_branch_coverage=1 00:06:17.747 --rc genhtml_function_coverage=1 00:06:17.747 --rc genhtml_legend=1 00:06:17.747 --rc geninfo_all_blocks=1 00:06:17.747 --rc geninfo_unexecuted_blocks=1 00:06:17.747 00:06:17.747 ' 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.747 --rc genhtml_branch_coverage=1 00:06:17.747 --rc genhtml_function_coverage=1 00:06:17.747 --rc genhtml_legend=1 00:06:17.747 --rc geninfo_all_blocks=1 00:06:17.747 --rc geninfo_unexecuted_blocks=1 00:06:17.747 00:06:17.747 ' 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.747 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:17.747 --rc genhtml_branch_coverage=1 00:06:17.747 --rc genhtml_function_coverage=1 00:06:17.747 --rc genhtml_legend=1 00:06:17.747 --rc geninfo_all_blocks=1 00:06:17.747 --rc geninfo_unexecuted_blocks=1 00:06:17.747 00:06:17.747 ' 00:06:17.747 19:04:27 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.747 --rc genhtml_branch_coverage=1 00:06:17.747 --rc genhtml_function_coverage=1 00:06:17.747 --rc genhtml_legend=1 00:06:17.747 --rc geninfo_all_blocks=1 00:06:17.747 --rc geninfo_unexecuted_blocks=1 00:06:17.747 00:06:17.747 ' 00:06:17.747 19:04:27 version -- app/version.sh@17 -- # get_header_version major 00:06:17.747 19:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # cut -f2 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.747 19:04:27 version -- app/version.sh@17 -- # major=25 00:06:17.747 19:04:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # cut -f2 00:06:17.747 19:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.747 19:04:27 version -- app/version.sh@18 -- # minor=1 00:06:17.747 19:04:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:17.747 19:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # cut -f2 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.747 19:04:27 version -- app/version.sh@19 -- # patch=0 00:06:17.747 19:04:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # cut -f2 00:06:17.747 19:04:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.747 19:04:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.747 19:04:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:17.747 19:04:27 version -- app/version.sh@22 -- # version=25.1 00:06:17.747 19:04:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:17.747 19:04:27 version -- app/version.sh@28 -- # version=25.1rc0 00:06:17.747 19:04:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:17.747 19:04:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:17.747 19:04:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:17.747 19:04:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:17.747 00:06:17.747 real 0m0.211s 00:06:17.747 user 0m0.122s 00:06:17.747 sys 0m0.114s 00:06:17.748 19:04:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.748 ************************************ 00:06:17.748 END TEST version 00:06:17.748 ************************************ 00:06:17.748 19:04:27 version -- common/autotest_common.sh@10 -- # set +x 00:06:17.748 19:04:27 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:17.748 19:04:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:17.748 19:04:27 -- spdk/autotest.sh@194 -- # uname -s 00:06:17.748 19:04:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:17.748 19:04:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:17.748 19:04:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:17.748 19:04:27 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:17.748 19:04:27 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:17.748 19:04:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.748 19:04:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.748 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.748 ************************************ 00:06:17.748 START TEST blockdev_nvme 00:06:17.748 ************************************ 00:06:17.748 19:04:27 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:18.014 * Looking for test storage... 00:06:18.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:18.014 19:04:27 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.014 19:04:27 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.015 19:04:27 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.015 --rc genhtml_branch_coverage=1 00:06:18.015 --rc genhtml_function_coverage=1 00:06:18.015 --rc genhtml_legend=1 00:06:18.015 --rc geninfo_all_blocks=1 00:06:18.015 --rc geninfo_unexecuted_blocks=1 00:06:18.015 00:06:18.015 ' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.015 --rc genhtml_branch_coverage=1 00:06:18.015 --rc genhtml_function_coverage=1 00:06:18.015 --rc genhtml_legend=1 00:06:18.015 --rc geninfo_all_blocks=1 00:06:18.015 --rc geninfo_unexecuted_blocks=1 00:06:18.015 00:06:18.015 ' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.015 --rc genhtml_branch_coverage=1 00:06:18.015 --rc genhtml_function_coverage=1 00:06:18.015 --rc genhtml_legend=1 00:06:18.015 --rc geninfo_all_blocks=1 00:06:18.015 --rc geninfo_unexecuted_blocks=1 00:06:18.015 00:06:18.015 ' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.015 --rc genhtml_branch_coverage=1 00:06:18.015 --rc genhtml_function_coverage=1 00:06:18.015 --rc genhtml_legend=1 00:06:18.015 --rc geninfo_all_blocks=1 00:06:18.015 --rc geninfo_unexecuted_blocks=1 00:06:18.015 00:06:18.015 ' 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.015 19:04:27 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59837 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59837 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59837 ']' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.015 19:04:27 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.015 19:04:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 [2024-11-27 19:04:27.591056] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:18.015 [2024-11-27 19:04:27.591462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:06:18.274 [2024-11-27 19:04:27.751358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.274 [2024-11-27 19:04:27.860658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.840 19:04:28 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.841 19:04:28 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:18.841 19:04:28 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:18.841 19:04:28 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:18.841 19:04:28 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:18.841 19:04:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:18.841 19:04:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:19.099 19:04:28 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:19.099 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.099 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.358 19:04:28 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:19.358 19:04:28 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:19.358 19:04:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4686e216-f5cf-4d77-9ab3-122f5b201132"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4686e216-f5cf-4d77-9ab3-122f5b201132",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "328bc15a-ace5-4ddf-bd16-5830b501001b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "328bc15a-ace5-4ddf-bd16-5830b501001b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "25a28b67-ddfd-4b7d-95d9-6e41c1b70ceb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "25a28b67-ddfd-4b7d-95d9-6e41c1b70ceb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ba079523-fc65-4a0f-8aee-5bae8132808a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba079523-fc65-4a0f-8aee-5bae8132808a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dcbdff56-49e2-4c77-95cc-933dad72f04f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "dcbdff56-49e2-4c77-95cc-933dad72f04f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "57393bc0-e199-402d-b7e8-674b7391aee2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "57393bc0-e199-402d-b7e8-674b7391aee2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:19.359 19:04:28 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59837 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59837 ']' 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59837 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:19.359 19:04:28 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59837 00:06:19.359 killing process with pid 59837 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59837' 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59837 00:06:19.359 19:04:28 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59837 00:06:20.735 19:04:30 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:20.735 19:04:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:20.735 19:04:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:20.735 19:04:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.735 19:04:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.735 ************************************ 00:06:20.735 START TEST bdev_hello_world 00:06:20.735 ************************************ 00:06:20.735 19:04:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:20.735 [2024-11-27 19:04:30.267862] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:20.735 [2024-11-27 19:04:30.268179] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:06:20.994 [2024-11-27 19:04:30.424339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.994 [2024-11-27 19:04:30.520705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.560 [2024-11-27 19:04:31.036938] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:21.560 [2024-11-27 19:04:31.037113] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:21.560 [2024-11-27 19:04:31.037163] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:21.560 [2024-11-27 19:04:31.039293] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:21.560 [2024-11-27 19:04:31.039797] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:21.560 [2024-11-27 19:04:31.039875] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:21.560 [2024-11-27 19:04:31.040120] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
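For reference, the hello_bdev run traced above is a standalone SPDK example application, so the same check can be repeated by hand against the generated bdev configuration. This is a minimal sketch using only the binary path, JSON config, and bdev name already shown in this log (the empty trailing '' argument passed by the test harness is omitted here as it is not needed for a manual run):

    # open Nvme0n1 through the bdev layer, write a test buffer, read it back, and print it
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1

On success it logs "Read string from bdev : Hello World!" and stops the app, matching the NOTICE lines above.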
00:06:21.560 00:06:21.560 [2024-11-27 19:04:31.040199] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:22.129 00:06:22.129 real 0m1.435s 00:06:22.129 user 0m1.139s 00:06:22.129 sys 0m0.190s 00:06:22.129 19:04:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.129 19:04:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:22.129 ************************************ 00:06:22.129 END TEST bdev_hello_world 00:06:22.129 ************************************ 00:06:22.129 19:04:31 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:22.129 19:04:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:22.129 19:04:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.129 19:04:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.129 ************************************ 00:06:22.129 START TEST bdev_bounds 00:06:22.129 ************************************ 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59952 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59952' 00:06:22.129 Process bdevio pid: 59952 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59952 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59952 ']' 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.129 19:04:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:22.458 [2024-11-27 19:04:31.770412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:22.458 [2024-11-27 19:04:31.771020] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:06:22.458 [2024-11-27 19:04:31.928317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.458 [2024-11-27 19:04:32.026460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.458 [2024-11-27 19:04:32.026776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.458 [2024-11-27 19:04:32.026790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.026 19:04:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.026 19:04:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:23.026 19:04:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:23.285 I/O targets: 00:06:23.285 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:23.285 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:23.285 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:23.285 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:23.285 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:23.285 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:23.285 00:06:23.285 00:06:23.285 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.285 http://cunit.sourceforge.net/ 00:06:23.285 00:06:23.285 00:06:23.285 Suite: bdevio tests on: Nvme3n1 00:06:23.285 Test: blockdev write read block ...passed 00:06:23.285 Test: blockdev write zeroes read block ...passed 00:06:23.285 Test: blockdev write zeroes read no split ...passed 00:06:23.285 Test: blockdev write zeroes read split ...passed 00:06:23.285 Test: blockdev write zeroes read split partial ...passed 00:06:23.285 Test: blockdev reset ...[2024-11-27 19:04:32.751163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:23.285 [2024-11-27 19:04:32.755467] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:06:23.285 Test: blockdev write read 8 blocks ...uccessful. 
00:06:23.285 passed 00:06:23.285 Test: blockdev write read size > 128k ...passed 00:06:23.285 Test: blockdev write read invalid size ...passed 00:06:23.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.285 Test: blockdev write read max offset ...passed 00:06:23.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.285 Test: blockdev writev readv 8 blocks ...passed 00:06:23.285 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.285 Test: blockdev writev readv block ...passed 00:06:23.285 Test: blockdev writev readv size > 128k ...passed 00:06:23.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.285 Test: blockdev comparev and writev ...[2024-11-27 19:04:32.771790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1c0a000 len:0x1000 00:06:23.285 [2024-11-27 19:04:32.771853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:23.285 passed 00:06:23.285 Test: blockdev nvme passthru rw ...passed 00:06:23.285 Test: blockdev nvme passthru vendor specific ...passed 00:06:23.285 Test: blockdev nvme admin passthru ...[2024-11-27 19:04:32.773901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:23.285 [2024-11-27 19:04:32.773939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:23.285 passed 00:06:23.285 Test: blockdev copy ...passed 00:06:23.285 Suite: bdevio tests on: Nvme2n3 00:06:23.285 Test: blockdev write read block ...passed 00:06:23.285 Test: blockdev write zeroes read block ...passed 00:06:23.285 Test: blockdev write zeroes read no split ...passed 00:06:23.285 Test: blockdev write zeroes read split ...passed 00:06:23.285 Test: blockdev write zeroes read split partial ...passed 00:06:23.285 Test: blockdev reset ...[2024-11-27 19:04:32.831677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:23.285 [2024-11-27 19:04:32.835170] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:23.285 passed 00:06:23.285 Test: blockdev write read 8 blocks ...passed 00:06:23.285 Test: blockdev write read size > 128k ...passed 00:06:23.285 Test: blockdev write read invalid size ...passed 00:06:23.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.285 Test: blockdev write read max offset ...passed 00:06:23.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.285 Test: blockdev writev readv 8 blocks ...passed 00:06:23.285 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.285 Test: blockdev writev readv block ...passed 00:06:23.285 Test: blockdev writev readv size > 128k ...passed 00:06:23.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.285 Test: blockdev comparev and writev ...[2024-11-27 19:04:32.849860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:23.285 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x294e06000 len:0x1000 00:06:23.285 [2024-11-27 19:04:32.850004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:23.285 passed 00:06:23.285 Test: blockdev nvme passthru vendor specific ...passed 00:06:23.285 Test: blockdev nvme admin passthru ...[2024-11-27 19:04:32.851425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:23.285 [2024-11-27 19:04:32.851456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:23.285 passed 00:06:23.285 Test: blockdev copy ...passed 00:06:23.285 Suite: bdevio tests on: Nvme2n2 00:06:23.285 Test: blockdev write read block ...passed 00:06:23.285 Test: blockdev write zeroes read block ...passed 00:06:23.285 Test: blockdev write zeroes read no split ...passed 00:06:23.285 Test: blockdev write zeroes read split ...passed 00:06:23.285 Test: blockdev write zeroes read split partial ...passed 00:06:23.285 Test: blockdev reset ...[2024-11-27 19:04:32.898599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:23.285 passed 00:06:23.285 Test: blockdev write read 8 blocks ...[2024-11-27 19:04:32.901588] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:23.285 passed 00:06:23.285 Test: blockdev write read size > 128k ...passed 00:06:23.285 Test: blockdev write read invalid size ...passed 00:06:23.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.286 Test: blockdev write read max offset ...passed 00:06:23.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.286 Test: blockdev writev readv 8 blocks ...passed 00:06:23.286 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.286 Test: blockdev writev readv block ...passed 00:06:23.286 Test: blockdev writev readv size > 128k ...passed 00:06:23.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.286 Test: blockdev comparev and writev ...[2024-11-27 19:04:32.907880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf83c000 len:0x1000 00:06:23.286 [2024-11-27 19:04:32.907922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:23.286 passed 00:06:23.286 Test: blockdev nvme passthru rw ...passed 00:06:23.286 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:04:32.908942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:23.286 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:06:23.286 [2024-11-27 19:04:32.909036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:23.286 passed 00:06:23.543 Test: blockdev copy ...passed 00:06:23.543 Suite: bdevio tests on: Nvme2n1 00:06:23.543 Test: blockdev write read block ...passed 00:06:23.543 Test: blockdev write zeroes read block ...passed 00:06:23.543 Test: blockdev write zeroes read no split ...passed 00:06:23.543 Test: blockdev write zeroes read split ...passed 00:06:23.543 Test: blockdev write zeroes read split partial ...passed 00:06:23.543 Test: blockdev reset ...[2024-11-27 19:04:32.973349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:23.543 [2024-11-27 19:04:32.976139] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:23.543 Test: blockdev write read 8 blocks ...uccessful. 
00:06:23.543 passed 00:06:23.543 Test: blockdev write read size > 128k ...passed 00:06:23.543 Test: blockdev write read invalid size ...passed 00:06:23.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.543 Test: blockdev write read max offset ...passed 00:06:23.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.543 Test: blockdev writev readv 8 blocks ...passed 00:06:23.543 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.543 Test: blockdev writev readv block ...passed 00:06:23.543 Test: blockdev writev readv size > 128k ...passed 00:06:23.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.543 Test: blockdev comparev and writev ...[2024-11-27 19:04:32.982112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf838000 len:0x1000 00:06:23.544 [2024-11-27 19:04:32.982157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:23.544 passed 00:06:23.544 Test: blockdev nvme passthru rw ...passed 00:06:23.544 Test: blockdev nvme passthru vendor specific ...passed 00:06:23.544 Test: blockdev nvme admin passthru ...[2024-11-27 19:04:32.982670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:23.544 [2024-11-27 19:04:32.982694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:23.544 passed 00:06:23.544 Test: blockdev copy ...passed 00:06:23.544 Suite: bdevio tests on: Nvme1n1 00:06:23.544 Test: blockdev write read block ...passed 00:06:23.544 Test: blockdev write zeroes read block ...passed 00:06:23.544 Test: blockdev write zeroes read no split ...passed 00:06:23.544 Test: blockdev write zeroes read split ...passed 00:06:23.544 Test: blockdev write zeroes read split partial ...passed 00:06:23.544 Test: blockdev reset ...[2024-11-27 19:04:33.024341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:23.544 [2024-11-27 19:04:33.026807] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spasseduccessful. 
00:06:23.544 00:06:23.544 Test: blockdev write read 8 blocks ...passed 00:06:23.544 Test: blockdev write read size > 128k ...passed 00:06:23.544 Test: blockdev write read invalid size ...passed 00:06:23.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.544 Test: blockdev write read max offset ...passed 00:06:23.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.544 Test: blockdev writev readv 8 blocks ...passed 00:06:23.544 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.544 Test: blockdev writev readv block ...passed 00:06:23.544 Test: blockdev writev readv size > 128k ...passed 00:06:23.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.544 Test: blockdev comparev and writev ...[2024-11-27 19:04:33.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf834000 len:0x1000 00:06:23.544 [2024-11-27 19:04:33.033878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:23.544 passed 00:06:23.544 Test: blockdev nvme passthru rw ...passed 00:06:23.544 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:04:33.034574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:23.544 [2024-11-27 19:04:33.034654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:23.544 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:23.544 passed 00:06:23.544 Test: blockdev copy ...passed 00:06:23.544 Suite: bdevio tests on: Nvme0n1 00:06:23.544 Test: blockdev write read block ...passed 00:06:23.544 Test: blockdev write zeroes read block ...passed 00:06:23.544 Test: blockdev write zeroes read no split ...passed 00:06:23.544 Test: blockdev write zeroes read split ...passed 00:06:23.544 Test: blockdev write zeroes read split partial ...passed 00:06:23.544 Test: blockdev reset ...[2024-11-27 19:04:33.094433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:23.544 [2024-11-27 19:04:33.096825] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:23.544 passed 00:06:23.544 Test: blockdev write read 8 blocks ...passed 00:06:23.544 Test: blockdev write read size > 128k ...passed 00:06:23.544 Test: blockdev write read invalid size ...passed 00:06:23.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:23.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:23.544 Test: blockdev write read max offset ...passed 00:06:23.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:23.544 Test: blockdev writev readv 8 blocks ...passed 00:06:23.544 Test: blockdev writev readv 30 x 1block ...passed 00:06:23.544 Test: blockdev writev readv block ...passed 00:06:23.544 Test: blockdev writev readv size > 128k ...passed 00:06:23.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:23.544 Test: blockdev comparev and writev ...[2024-11-27 19:04:33.102758] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:23.544 separate metadata which is not supported yet. 
00:06:23.544 passed 00:06:23.544 Test: blockdev nvme passthru rw ...passed 00:06:23.544 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:04:33.103199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:23.544 [2024-11-27 19:04:33.103258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:06:23.544 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:06:23.544 passed 00:06:23.544 Test: blockdev copy ...passed 00:06:23.544 00:06:23.544 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.544 suites 6 6 n/a 0 0 00:06:23.544 tests 138 138 138 0 0 00:06:23.544 asserts 893 893 893 0 n/a 00:06:23.544 00:06:23.544 Elapsed time = 1.041 seconds 00:06:23.544 0 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59952 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59952 ']' 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59952 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59952 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.544 killing process with pid 59952 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59952' 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59952 00:06:23.544 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59952 00:06:24.479 19:04:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:24.479 00:06:24.479 real 0m2.117s 00:06:24.479 user 0m5.384s 00:06:24.479 sys 0m0.306s 00:06:24.479 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.479 ************************************ 00:06:24.479 END TEST bdev_bounds 00:06:24.479 ************************************ 00:06:24.479 19:04:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:24.479 19:04:33 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:24.479 19:04:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:24.479 19:04:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.479 19:04:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.479 ************************************ 00:06:24.479 START TEST bdev_nbd 00:06:24.479 ************************************ 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:24.479 19:04:33 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:24.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60006 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60006 /var/tmp/spdk-nbd.sock 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60006 ']' 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:24.479 19:04:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:24.479 [2024-11-27 19:04:33.933630] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:24.479 [2024-11-27 19:04:33.933901] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.479 [2024-11-27 19:04:34.091343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.737 [2024-11-27 19:04:34.183030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.302 1+0 records in 
00:06:25.302 1+0 records out 00:06:25.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516331 s, 7.9 MB/s 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.302 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:25.303 19:04:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.560 1+0 records in 00:06:25.560 1+0 records out 00:06:25.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720481 s, 5.7 MB/s 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:25.560 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:25.817 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:25.817 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:25.817 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:25.817 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:25.817 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.818 1+0 records in 00:06:25.818 1+0 records out 00:06:25.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436046 s, 9.4 MB/s 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:25.818 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.075 1+0 records in 00:06:26.075 1+0 records out 00:06:26.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115153 s, 3.6 MB/s 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.075 19:04:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:26.075 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.336 1+0 records in 00:06:26.336 1+0 records out 00:06:26.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705701 s, 5.8 MB/s 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:26.336 19:04:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.597 1+0 records in 00:06:26.597 1+0 records out 00:06:26.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519614 s, 7.9 MB/s 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:26.597 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd0", 00:06:26.857 "bdev_name": "Nvme0n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd1", 00:06:26.857 "bdev_name": "Nvme1n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd2", 00:06:26.857 "bdev_name": "Nvme2n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd3", 00:06:26.857 "bdev_name": "Nvme2n2" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd4", 00:06:26.857 "bdev_name": "Nvme2n3" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd5", 00:06:26.857 "bdev_name": "Nvme3n1" 00:06:26.857 } 00:06:26.857 ]' 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd0", 00:06:26.857 "bdev_name": "Nvme0n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd1", 00:06:26.857 "bdev_name": "Nvme1n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd2", 00:06:26.857 "bdev_name": "Nvme2n1" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd3", 00:06:26.857 "bdev_name": "Nvme2n2" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd4", 00:06:26.857 "bdev_name": "Nvme2n3" 00:06:26.857 }, 00:06:26.857 { 00:06:26.857 "nbd_device": "/dev/nbd5", 00:06:26.857 "bdev_name": "Nvme3n1" 00:06:26.857 } 00:06:26.857 ]' 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.857 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.117 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.376 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.376 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.376 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.376 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.377 19:04:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.638 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.900 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.162 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.424 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.424 19:04:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.424 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.424 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.425 19:04:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:28.687 /dev/nbd0 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.687 
19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.687 1+0 records in 00:06:28.687 1+0 records out 00:06:28.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00290022 s, 1.4 MB/s 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.687 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:28.949 /dev/nbd1 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.949 1+0 records in 00:06:28.949 1+0 records out 00:06:28.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129627 s, 3.2 MB/s 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:28.949 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:29.210 /dev/nbd10 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.210 1+0 records in 00:06:29.210 1+0 records out 00:06:29.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101804 s, 4.0 MB/s 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:29.210 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:29.472 /dev/nbd11 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.472 1+0 records in 00:06:29.472 1+0 records out 00:06:29.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000925146 s, 4.4 MB/s 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:29.472 19:04:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:29.737 /dev/nbd12 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.737 1+0 records in 00:06:29.737 1+0 records out 00:06:29.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000988925 s, 4.1 MB/s 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:29.737 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.738 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:29.738 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:30.001 /dev/nbd13 00:06:30.001 19:04:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.001 1+0 records in 00:06:30.001 1+0 records out 00:06:30.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111015 s, 3.7 MB/s 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.001 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd0", 00:06:30.262 "bdev_name": "Nvme0n1" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd1", 00:06:30.262 "bdev_name": "Nvme1n1" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd10", 00:06:30.262 "bdev_name": "Nvme2n1" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd11", 00:06:30.262 "bdev_name": "Nvme2n2" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd12", 00:06:30.262 "bdev_name": "Nvme2n3" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd13", 00:06:30.262 "bdev_name": "Nvme3n1" 00:06:30.262 } 00:06:30.262 ]' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd0", 00:06:30.262 "bdev_name": "Nvme0n1" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd1", 00:06:30.262 "bdev_name": "Nvme1n1" 00:06:30.262 }, 00:06:30.262 { 
00:06:30.262 "nbd_device": "/dev/nbd10", 00:06:30.262 "bdev_name": "Nvme2n1" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd11", 00:06:30.262 "bdev_name": "Nvme2n2" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd12", 00:06:30.262 "bdev_name": "Nvme2n3" 00:06:30.262 }, 00:06:30.262 { 00:06:30.262 "nbd_device": "/dev/nbd13", 00:06:30.262 "bdev_name": "Nvme3n1" 00:06:30.262 } 00:06:30.262 ]' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.262 /dev/nbd1 00:06:30.262 /dev/nbd10 00:06:30.262 /dev/nbd11 00:06:30.262 /dev/nbd12 00:06:30.262 /dev/nbd13' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.262 /dev/nbd1 00:06:30.262 /dev/nbd10 00:06:30.262 /dev/nbd11 00:06:30.262 /dev/nbd12 00:06:30.262 /dev/nbd13' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:30.262 256+0 records in 00:06:30.262 256+0 records out 00:06:30.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065564 s, 160 MB/s 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.262 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.523 256+0 records in 00:06:30.523 256+0 records out 00:06:30.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204893 s, 5.1 MB/s 00:06:30.523 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.523 19:04:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.786 256+0 records in 00:06:30.786 256+0 records out 00:06:30.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239418 s, 4.4 MB/s 00:06:30.786 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.786 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:31.048 256+0 records in 00:06:31.048 256+0 records out 00:06:31.048 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.26181 s, 4.0 MB/s 00:06:31.048 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.048 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:31.310 256+0 records in 00:06:31.310 256+0 records out 00:06:31.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265858 s, 3.9 MB/s 00:06:31.310 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.310 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:31.572 256+0 records in 00:06:31.572 256+0 records out 00:06:31.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222152 s, 4.7 MB/s 00:06:31.572 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.572 19:04:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:31.833 256+0 records in 00:06:31.833 256+0 records out 00:06:31.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245948 s, 4.3 MB/s 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:31.833 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.834 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.834 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.834 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:31.834 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.834 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.093 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.403 19:04:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.664 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.925 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:33.185 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.186 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:33.444 19:04:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:33.702 malloc_lvol_verify 00:06:33.702 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:33.959 b2cd4cd1-80a3-4256-8335-d5cc6d53a5e9 00:06:33.959 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:33.959 b5cefadb-7161-45f6-8e1e-5b3837d751ca 00:06:33.959 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:34.220 /dev/nbd0 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:34.220 mke2fs 1.47.0 (5-Feb-2023) 00:06:34.220 Discarding device blocks: 0/4096 done 00:06:34.220 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:34.220 00:06:34.220 Allocating group tables: 0/1 done 00:06:34.220 Writing inode tables: 0/1 done 00:06:34.220 Creating journal (1024 blocks): done 00:06:34.220 Writing superblocks and filesystem accounting information: 0/1 done 00:06:34.220 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:34.220 19:04:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.220 19:04:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60006 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60006 ']' 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60006 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60006 00:06:34.481 killing process with pid 60006 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60006' 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60006 00:06:34.481 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60006 00:06:35.424 19:04:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:35.424 00:06:35.424 real 0m11.050s 00:06:35.424 user 0m14.916s 00:06:35.424 sys 0m3.623s 00:06:35.424 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.424 19:04:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.424 ************************************ 00:06:35.424 END TEST bdev_nbd 00:06:35.424 ************************************ 00:06:35.424 19:04:44 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:35.424 19:04:44 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:35.424 skipping fio tests on NVMe due to multi-ns failures. 00:06:35.424 19:04:44 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
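The bdev_nbd test that just finished repeats one pattern per device: export an SPDK bdev as a kernel /dev/nbdX over the app's RPC socket, poll /proc/partitions until the kernel lists (or drops) the node, then prove data integrity with dd and cmp. A condensed standalone sketch of that flow for a single device follows; it assumes an SPDK app already serving /var/tmp/spdk-nbd.sock with a bdev named Nvme0n1, /tmp/randtest is a scratch path invented for the sketch, and the 20-try poll budget and 1 MiB verify size mirror the trace:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Poll /proc/partitions until the device is present (want=0, grep's success
    # status) or absent (want=1), giving up after 20 tries as the trace does.
    wait_nbd_state() {
        local name=$1 want=$2 i rc
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$name" /proc/partitions; then rc=0; else rc=1; fi
            (( rc == want )) && return 0
            sleep 0.1
        done
        return 1
    }

    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    wait_nbd_state nbd0 0                                  # wait for the node to appear
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # readiness probe, as in waitfornbd

    dd if=/dev/urandom of=/tmp/randtest bs=4096 count=256  # 1 MiB of random test data
    dd if=/tmp/randtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/randtest /dev/nbd0                   # byte-for-byte read-back check

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    wait_nbd_state nbd0 1                                  # wait for the node to vanish

The lvol leg of the test (nbd_with_lvol_verify) layers a malloc bdev, an lvstore, and an lvol under the same export path and runs mkfs.ext4 on /dev/nbd0 before tearing everything down and killing pid 60006.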
00:06:35.424 19:04:44 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:35.424 19:04:44 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:35.424 19:04:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:35.424 19:04:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.424 19:04:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.424 ************************************ 00:06:35.424 START TEST bdev_verify 00:06:35.424 ************************************ 00:06:35.424 19:04:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:35.424 [2024-11-27 19:04:45.050954] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:35.424 [2024-11-27 19:04:45.051093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60396 ] 00:06:35.683 [2024-11-27 19:04:45.216314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.944 [2024-11-27 19:04:45.324386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.944 [2024-11-27 19:04:45.324472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.515 Running I/O for 5 seconds... 00:06:38.468 19840.00 IOPS, 77.50 MiB/s [2024-11-27T19:04:49.485Z] 18624.00 IOPS, 72.75 MiB/s [2024-11-27T19:04:50.428Z] 18901.33 IOPS, 73.83 MiB/s [2024-11-27T19:04:51.376Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-27T19:04:51.376Z] 18905.60 IOPS, 73.85 MiB/s 00:06:41.741 Latency(us) 00:06:41.741 [2024-11-27T19:04:51.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.741 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0xbd0bd 00:06:41.741 Nvme0n1 : 5.07 1553.60 6.07 0.00 0.00 81999.49 12855.14 77836.60 00:06:41.741 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:41.741 Nvme0n1 : 5.06 1542.99 6.03 0.00 0.00 82673.59 20064.10 87112.47 00:06:41.741 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0xa0000 00:06:41.741 Nvme1n1 : 5.07 1553.08 6.07 0.00 0.00 81925.37 13409.67 71787.13 00:06:41.741 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0xa0000 length 0xa0000 00:06:41.741 Nvme1n1 : 5.06 1542.44 6.03 0.00 0.00 82405.54 20467.40 72593.72 00:06:41.741 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0x80000 00:06:41.741 Nvme2n1 : 5.07 1552.55 6.06 0.00 0.00 81795.64 11141.12 70173.93 00:06:41.741 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x80000 length 0x80000 00:06:41.741 Nvme2n1 : 5.06 1541.91 6.02 0.00 0.00 82213.61 22282.24 70173.93 00:06:41.741 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0x80000 00:06:41.741 Nvme2n2 : 5.09 1559.94 6.09 0.00 0.00 81451.89 12754.31 61704.66 00:06:41.741 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x80000 length 0x80000 00:06:41.741 Nvme2n2 : 5.08 1548.64 6.05 0.00 0.00 81700.61 7612.26 66544.25 00:06:41.741 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0x80000 00:06:41.741 Nvme2n3 : 5.09 1559.52 6.09 0.00 0.00 81249.44 12653.49 61704.66 00:06:41.741 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x80000 length 0x80000 00:06:41.741 Nvme2n3 : 5.10 1556.79 6.08 0.00 0.00 81225.15 11544.42 66140.95 00:06:41.741 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x0 length 0x20000 00:06:41.741 Nvme3n1 : 5.09 1559.09 6.09 0.00 0.00 81142.32 11998.13 65334.35 00:06:41.741 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.741 Verification LBA range: start 0x20000 length 0x20000 00:06:41.741 Nvme3n1 : 5.10 1556.34 6.08 0.00 0.00 81112.35 11796.48 68157.44 00:06:41.741 [2024-11-27T19:04:51.376Z] =================================================================================================================== 00:06:41.741 [2024-11-27T19:04:51.376Z] Total : 18626.90 72.76 0.00 0.00 81738.14 7612.26 87112.47 00:06:43.131 00:06:43.131 real 0m7.442s 00:06:43.131 user 0m13.760s 00:06:43.131 sys 0m0.299s 00:06:43.131 19:04:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.131 ************************************ 00:06:43.131 END TEST bdev_verify 00:06:43.131 ************************************ 00:06:43.131 19:04:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:43.131 19:04:52 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:43.131 19:04:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:43.131 19:04:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.131 19:04:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.131 ************************************ 00:06:43.131 START TEST bdev_verify_big_io 00:06:43.131 ************************************ 00:06:43.131 19:04:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:43.131 [2024-11-27 19:04:52.571896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
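Both verify stages drive the same bdevperf harness against the six NVMe bdevs; only the I/O size and duration change between them. Here is the invocation reformatted with the flags spelled out; the meanings follow my reading of bdevperf's usage text, so treat them as assumptions to confirm against --help, and the trailing quoted-empty argument from the trace is omitted:

    #   --json    : bdev configuration generated earlier (bdev.json)
    #   -q 128    : queue depth per job
    #   -o 4096   : I/O size in bytes (the big-I/O rerun swaps in -o 65536)
    #   -w verify : write a pattern, read it back, and compare
    #   -t 5      : run time in seconds
    #   -C        : every core submits I/O to every bdev
    #   -m 0x3    : core mask for cores 0 and 1, hence the paired 0x1/0x2 jobs per bdev
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

At 4 KiB the run just completed sustains roughly 18.6K IOPS aggregate; the 64 KiB rerun below trades IOPS for bandwidth, reporting ~1.6K IOPS but a higher aggregate MiB/s.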
00:06:43.131 [2024-11-27 19:04:52.572066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:06:43.131 [2024-11-27 19:04:52.740537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.393 [2024-11-27 19:04:52.894351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.393 [2024-11-27 19:04:52.894442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.339 Running I/O for 5 seconds... 00:06:49.425 1055.00 IOPS, 65.94 MiB/s [2024-11-27T19:04:59.627Z] 2361.00 IOPS, 147.56 MiB/s [2024-11-27T19:04:59.887Z] 3063.00 IOPS, 191.44 MiB/s 00:06:50.252 Latency(us) 00:06:50.252 [2024-11-27T19:04:59.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0xbd0b 00:06:50.252 Nvme0n1 : 5.63 122.65 7.67 0.00 0.00 990261.67 30852.33 1464780.01 00:06:50.252 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:50.252 Nvme0n1 : 5.82 120.93 7.56 0.00 0.00 1020299.28 49807.36 1019538.51 00:06:50.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0xa000 00:06:50.252 Nvme1n1 : 5.63 125.99 7.87 0.00 0.00 946574.05 51420.55 1490591.11 00:06:50.252 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0xa000 length 0xa000 00:06:50.252 Nvme1n1 : 5.82 120.88 7.55 0.00 0.00 992973.23 84692.68 903388.55 00:06:50.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0x8000 00:06:50.252 Nvme2n1 : 5.74 130.22 8.14 0.00 0.00 887735.50 71787.13 1516402.22 00:06:50.252 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x8000 length 0x8000 00:06:50.252 Nvme2n1 : 5.91 126.14 7.88 0.00 0.00 932929.08 43959.53 909841.33 00:06:50.252 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0x8000 00:06:50.252 Nvme2n2 : 5.88 142.44 8.90 0.00 0.00 786744.95 50613.96 1135688.47 00:06:50.252 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x8000 length 0x8000 00:06:50.252 Nvme2n2 : 5.91 126.45 7.90 0.00 0.00 901288.24 44766.13 929199.66 00:06:50.252 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0x8000 00:06:50.252 Nvme2n3 : 5.92 148.55 9.28 0.00 0.00 732544.86 14317.10 1619646.62 00:06:50.252 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x8000 length 0x8000 00:06:50.252 Nvme2n3 : 5.91 129.95 8.12 0.00 0.00 852351.47 34683.67 942105.21 00:06:50.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.252 Verification LBA range: start 0x0 length 0x2000 00:06:50.252 Nvme3n1 : 5.98 197.78 12.36 0.00 0.00 535466.48 422.20 1200216.22 00:06:50.252 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:06:50.252 Verification LBA range: start 0x2000 length 0x2000 00:06:50.252 Nvme3n1 : 5.92 140.52 8.78 0.00 0.00 764902.16 3906.95 948557.98 00:06:50.252 [2024-11-27T19:04:59.887Z] =================================================================================================================== 00:06:50.252 [2024-11-27T19:04:59.887Z] Total : 1632.51 102.03 0.00 0.00 841283.62 422.20 1619646.62 00:06:51.635 00:06:51.635 real 0m8.681s 00:06:51.635 user 0m16.219s 00:06:51.635 sys 0m0.372s 00:06:51.636 ************************************ 00:06:51.636 END TEST bdev_verify_big_io 00:06:51.636 ************************************ 00:06:51.636 19:05:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.636 19:05:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:51.636 19:05:01 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.636 19:05:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.636 19:05:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.636 19:05:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.636 ************************************ 00:06:51.636 START TEST bdev_write_zeroes 00:06:51.636 ************************************ 00:06:51.636 19:05:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.897 [2024-11-27 19:05:01.286423] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:51.897 [2024-11-27 19:05:01.286555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:06:51.897 [2024-11-27 19:05:01.448350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.158 [2024-11-27 19:05:01.586416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.733 Running I/O for 1 seconds... 
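The write_zeroes stage just launched reuses the harness with -w write_zeroes -t 1 and no -C/-m pair, so it runs on the single available core (the log reports "Total cores available: 1"). Zero-fill I/O carries no pattern to read back, which is why this pass finishes in a second and the table that follows reports submission throughput only. The delta from the verify invocation is just the workload and duration:

    # Same harness, zero-fill workload for one second on one core:
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1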
00:06:53.669 61824.00 IOPS, 241.50 MiB/s 00:06:53.669 Latency(us) 00:06:53.669 [2024-11-27T19:05:03.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.669 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme0n1 : 1.02 10296.05 40.22 0.00 0.00 12405.08 5419.32 27021.00 00:06:53.669 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme1n1 : 1.02 10283.88 40.17 0.00 0.00 12398.95 9679.16 20870.70 00:06:53.669 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme2n1 : 1.02 10271.59 40.12 0.00 0.00 12358.02 8771.74 21778.12 00:06:53.669 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme2n2 : 1.02 10259.89 40.08 0.00 0.00 12348.30 8620.50 21878.94 00:06:53.669 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme2n3 : 1.02 10248.33 40.03 0.00 0.00 12339.43 8418.86 21072.34 00:06:53.669 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.669 Nvme3n1 : 1.03 10235.84 39.98 0.00 0.00 12325.26 7208.96 20870.70 00:06:53.669 [2024-11-27T19:05:03.304Z] =================================================================================================================== 00:06:53.669 [2024-11-27T19:05:03.304Z] Total : 61595.58 240.61 0.00 0.00 12362.51 5419.32 27021.00 00:06:54.612 00:06:54.612 real 0m2.966s 00:06:54.612 user 0m2.566s 00:06:54.612 sys 0m0.279s 00:06:54.612 19:05:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.612 ************************************ 00:06:54.612 END TEST bdev_write_zeroes 00:06:54.612 ************************************ 00:06:54.612 19:05:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:54.612 19:05:04 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.612 19:05:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:54.612 19:05:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.612 19:05:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.872 ************************************ 00:06:54.872 START TEST bdev_json_nonenclosed 00:06:54.872 ************************************ 00:06:54.872 19:05:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.872 [2024-11-27 19:05:04.330417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
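The two runs that close out this suite (bdev_json_nonenclosed and bdev_json_nonarray) are negative tests: bdevperf is pointed at deliberately malformed --json files and must fail cleanly with the validation errors shown below. The log never prints the files themselves, so the shapes here are only illustrative reconstructions of what each error message implies:

    # nonenclosed.json-style input, rejected with
    # "Invalid JSON configuration: not enclosed in {}.":
    "subsystems": []

    # nonarray.json-style input, rejected with
    # "Invalid JSON configuration: 'subsystems' should be an array.":
    { "subsystems": {} }

    # Minimal shape the loader accepts: an object whose "subsystems" key is an array.
    { "subsystems": [] }

Both runs end with spdk_app_stop'd on non-zero, and the surrounding test treats that rejection as the expected outcome.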
00:06:54.872 [2024-11-27 19:05:04.330565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:06:54.872 [2024-11-27 19:05:04.496981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.133 [2024-11-27 19:05:04.644599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.133 [2024-11-27 19:05:04.644718] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:55.133 [2024-11-27 19:05:04.644740] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:55.133 [2024-11-27 19:05:04.644753] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.395 00:06:55.395 real 0m0.609s 00:06:55.395 user 0m0.362s 00:06:55.395 sys 0m0.141s 00:06:55.395 19:05:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.395 ************************************ 00:06:55.395 END TEST bdev_json_nonenclosed 00:06:55.395 ************************************ 00:06:55.395 19:05:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:55.395 19:05:04 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.395 19:05:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:55.395 19:05:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.395 19:05:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.395 ************************************ 00:06:55.395 START TEST bdev_json_nonarray 00:06:55.395 ************************************ 00:06:55.395 19:05:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.395 [2024-11-27 19:05:05.005223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:55.395 [2024-11-27 19:05:05.005436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ] 00:06:55.657 [2024-11-27 19:05:05.175331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.919 [2024-11-27 19:05:05.322273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.919 [2024-11-27 19:05:05.322423] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:55.919 [2024-11-27 19:05:05.322445] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:55.919 [2024-11-27 19:05:05.322457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.919 00:06:55.919 real 0m0.613s 00:06:55.919 user 0m0.365s 00:06:55.919 sys 0m0.142s 00:06:55.919 19:05:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.919 ************************************ 00:06:55.919 END TEST bdev_json_nonarray 00:06:55.919 ************************************ 00:06:55.919 19:05:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:56.180 19:05:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:56.180 00:06:56.180 real 0m38.272s 00:06:56.180 user 0m57.611s 00:06:56.180 sys 0m6.218s 00:06:56.180 ************************************ 00:06:56.180 END TEST blockdev_nvme 00:06:56.180 ************************************ 00:06:56.180 19:05:05 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.180 19:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 19:05:05 -- spdk/autotest.sh@209 -- # uname -s 00:06:56.180 19:05:05 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:56.180 19:05:05 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.180 19:05:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.180 19:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.180 19:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 ************************************ 00:06:56.180 START TEST blockdev_nvme_gpt 00:06:56.180 ************************************ 00:06:56.180 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.180 * Looking for test storage... 
00:06:56.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.180 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.180 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.180 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.180 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:56.180 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.442 19:05:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc 
genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.442 --rc genhtml_branch_coverage=1 00:06:56.442 --rc genhtml_function_coverage=1 00:06:56.442 --rc genhtml_legend=1 00:06:56.442 --rc geninfo_all_blocks=1 00:06:56.442 --rc geninfo_unexecuted_blocks=1 00:06:56.442 00:06:56.442 ' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:56.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60775 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60775 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60775 ']' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.442 19:05:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.442 19:05:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.442 [2024-11-27 19:05:05.933833] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:56.442 [2024-11-27 19:05:05.934020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60775 ] 00:06:56.704 [2024-11-27 19:05:06.100971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.704 [2024-11-27 19:05:06.246413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.667 19:05:07 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.667 19:05:07 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:57.667 19:05:07 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:57.667 19:05:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:57.667 19:05:07 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:57.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.928 Waiting for block devices as requested 00:06:57.928 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.188 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.188 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.448 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.746 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- 
# for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:03.746 BYT; 00:07:03.746 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:03.746 BYT; 00:07:03.746 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:03.746 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:03.746 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:03.746 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.747 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.747 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.747 19:05:12 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.747 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.747 19:05:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:04.681 The operation has completed successfully. 00:07:04.681 19:05:14 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:05.615 The operation has completed successfully. 
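[editor's note — illustrative sketch, not captured log output] The xtrace above shows setup_gpt_conf picking the first NVMe disk without a recognised label, creating two half-disk GPT partitions, and then retyping them with the SPDK partition-type GUIDs read from module/bdev/gpt/gpt.h so the gpt bdev module will expose them as Nvme1n1p1/Nvme1n1p2. A condensed, hedged reconstruction of that sequence is below; the variable names are illustrative (not the script's own), the device path and GUIDs are copied from the log, and this should only ever be run against a scratch disk.

    # Sketch of the GPT setup performed by setup_gpt_conf above (assumed, for illustration only)
    dev=/dev/nvme0n1                                        # disk reported as "unrecognised disk label" by parted
    spdk_gpt_guid=6527994e-2c5a-4eec-9613-8f5944074e8b      # SPDK_GPT_PART_TYPE_GUID from module/bdev/gpt/gpt.h
    spdk_gpt_old_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c  # SPDK_GPT_PART_TYPE_GUID_OLD
    part1_uuid=6f89f330-603b-4116-ac73-2ca8eae53030         # g_unique_partguid in the log
    part2_uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df         # g_unique_partguid_old in the log

    # Create a fresh GPT label with two partitions covering the first and second half of the disk.
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Retype each partition and set its unique GUID so SPDK's gpt vbdev claims them.
    sgdisk -t 1:"$spdk_gpt_guid"     -u 1:"$part1_uuid" "$dev"
    sgdisk -t 2:"$spdk_gpt_old_guid" -u 2:"$part2_uuid" "$dev"

After both sgdisk calls report "The operation has completed successfully." (as seen above), setup.sh is re-run to rebind the controllers and the resulting Nvme1n1p1/Nvme1n1p2 GPT bdevs show up in the bdev_get_bdevs output later in this log.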
00:07:05.615 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.440 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.440 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.440 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.440 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.440 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:06.440 19:05:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.440 19:05:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 [] 00:07:06.440 19:05:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.440 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:06.440 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:06.440 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.440 19:05:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.440 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.440 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.440 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.699 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.699 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:06.699 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.699 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.958 19:05:16 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.958 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:06.958 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:06.959 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e1351e8b-1e3a-4629-a1d8-34f873a93917"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e1351e8b-1e3a-4629-a1d8-34f873a93917",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1bc9b022-7992-4d32-a07b-e0542cd19ba5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1bc9b022-7992-4d32-a07b-e0542cd19ba5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cf13450a-8d49-4fa4-933f-982e467ce3ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cf13450a-8d49-4fa4-933f-982e467ce3ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "447e1d94-f6b9-487c-a251-319d195c6cf6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "447e1d94-f6b9-487c-a251-319d195c6cf6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "82e16d4f-87c0-4f8e-94f2-ae583deed9e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "82e16d4f-87c0-4f8e-94f2-ae583deed9e3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.959 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.959 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:06.959 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:06.959 19:05:16 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60775 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60775 ']' 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60775 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60775 00:07:06.959 killing process with pid 60775 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60775' 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60775 00:07:06.959 19:05:16 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60775 00:07:08.335 19:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:08.335 19:05:17 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.335 19:05:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:08.335 19:05:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.335 19:05:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.335 ************************************ 00:07:08.335 START TEST bdev_hello_world 00:07:08.335 ************************************ 00:07:08.335 19:05:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.335 [2024-11-27 19:05:17.837676] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:08.335 [2024-11-27 19:05:17.837810] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61389 ] 00:07:08.594 [2024-11-27 19:05:17.995726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.594 [2024-11-27 19:05:18.099811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.162 [2024-11-27 19:05:18.616638] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:09.162 [2024-11-27 19:05:18.616681] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:09.162 [2024-11-27 19:05:18.616697] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:09.162 [2024-11-27 19:05:18.618832] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:09.162 [2024-11-27 19:05:18.619788] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:09.162 [2024-11-27 19:05:18.619917] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:09.162 [2024-11-27 19:05:18.620420] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:09.162 00:07:09.162 [2024-11-27 19:05:18.620443] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:09.729 00:07:09.729 real 0m1.466s 00:07:09.729 user 0m1.157s 00:07:09.729 sys 0m0.203s 00:07:09.729 ************************************ 00:07:09.729 END TEST bdev_hello_world 00:07:09.729 ************************************ 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:09.729 19:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:09.729 19:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.729 19:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.729 19:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.729 ************************************ 00:07:09.729 START TEST bdev_bounds 00:07:09.729 ************************************ 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:09.729 Process bdevio pid: 61426 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61426 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61426' 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61426 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61426 ']' 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.729 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.729 19:05:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.987 [2024-11-27 19:05:19.370547] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:09.987 [2024-11-27 19:05:19.370656] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61426 ] 00:07:09.987 [2024-11-27 19:05:19.524090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.987 [2024-11-27 19:05:19.617734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.987 [2024-11-27 19:05:19.617920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.987 [2024-11-27 19:05:19.617937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.920 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.920 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:10.920 19:05:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:10.920 I/O targets: 00:07:10.920 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:10.920 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:10.920 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:10.920 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.920 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.920 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.920 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:10.920 00:07:10.920 00:07:10.920 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.920 http://cunit.sourceforge.net/ 00:07:10.920 00:07:10.920 00:07:10.920 Suite: bdevio tests on: Nvme3n1 00:07:10.920 Test: blockdev write read block ...passed 00:07:10.920 Test: blockdev write zeroes read block ...passed 00:07:10.920 Test: blockdev write zeroes read no split ...passed 00:07:10.920 Test: blockdev write zeroes read split ...passed 00:07:10.920 Test: blockdev write zeroes read split partial ...passed 00:07:10.920 Test: blockdev reset ...[2024-11-27 19:05:20.327235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:10.920 [2024-11-27 19:05:20.329945] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:10.920 passed 00:07:10.920 Test: blockdev write read 8 blocks ...passed 00:07:10.920 Test: blockdev write read size > 128k ...passed 00:07:10.920 Test: blockdev write read invalid size ...passed 00:07:10.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.920 Test: blockdev write read max offset ...passed 00:07:10.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.920 Test: blockdev writev readv 8 blocks ...passed 00:07:10.920 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.920 Test: blockdev writev readv block ...passed 00:07:10.920 Test: blockdev writev readv size > 128k ...passed 00:07:10.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.920 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.336423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af404000 len:0x1000 00:07:10.920 [2024-11-27 19:05:20.336482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.920 passed 00:07:10.920 Test: blockdev nvme passthru rw ...passed 00:07:10.920 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:05:20.337053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.920 passed 00:07:10.920 Test: blockdev nvme admin passthru ...[2024-11-27 19:05:20.337079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.920 passed 00:07:10.920 Test: blockdev copy ...passed 00:07:10.920 Suite: bdevio tests on: Nvme2n3 00:07:10.920 Test: blockdev write read block ...passed 00:07:10.920 Test: blockdev write zeroes read block ...passed 00:07:10.920 Test: blockdev write zeroes read no split ...passed 00:07:10.920 Test: blockdev write zeroes read split ...passed 00:07:10.920 Test: blockdev write zeroes read split partial ...passed 00:07:10.920 Test: blockdev reset ...[2024-11-27 19:05:20.380751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.920 [2024-11-27 19:05:20.384018] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.920 passed 00:07:10.920 Test: blockdev write read 8 blocks ...passed 00:07:10.920 Test: blockdev write read size > 128k ...passed 00:07:10.920 Test: blockdev write read invalid size ...passed 00:07:10.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.920 Test: blockdev write read max offset ...passed 00:07:10.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.920 Test: blockdev writev readv 8 blocks ...passed 00:07:10.920 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.920 Test: blockdev writev readv block ...passed 00:07:10.920 Test: blockdev writev readv size > 128k ...passed 00:07:10.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.921 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.390137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af402000 len:0x1000 00:07:10.921 [2024-11-27 19:05:20.390181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme passthru rw ...passed 00:07:10.921 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:05:20.390699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme admin passthru ...[2024-11-27 19:05:20.390721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev copy ...passed 00:07:10.921 Suite: bdevio tests on: Nvme2n2 00:07:10.921 Test: blockdev write read block ...passed 00:07:10.921 Test: blockdev write zeroes read block ...passed 00:07:10.921 Test: blockdev write zeroes read no split ...passed 00:07:10.921 Test: blockdev write zeroes read split ...passed 00:07:10.921 Test: blockdev write zeroes read split partial ...passed 00:07:10.921 Test: blockdev reset ...[2024-11-27 19:05:20.431987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.921 [2024-11-27 19:05:20.434934] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.921 passed 00:07:10.921 Test: blockdev write read 8 blocks ...passed 00:07:10.921 Test: blockdev write read size > 128k ...passed 00:07:10.921 Test: blockdev write read invalid size ...passed 00:07:10.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.921 Test: blockdev write read max offset ...passed 00:07:10.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.921 Test: blockdev writev readv 8 blocks ...passed 00:07:10.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.921 Test: blockdev writev readv block ...passed 00:07:10.921 Test: blockdev writev readv size > 128k ...passed 00:07:10.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.921 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.440977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1638000 len:0x1000 00:07:10.921 [2024-11-27 19:05:20.441118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme passthru rw ...passed 00:07:10.921 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:05:20.442164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.921 [2024-11-27 19:05:20.442224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme admin passthru ...passed 00:07:10.921 Test: blockdev copy ...passed 00:07:10.921 Suite: bdevio tests on: Nvme2n1 00:07:10.921 Test: blockdev write read block ...passed 00:07:10.921 Test: blockdev write zeroes read block ...passed 00:07:10.921 Test: blockdev write zeroes read no split ...passed 00:07:10.921 Test: blockdev write zeroes read split ...passed 00:07:10.921 Test: blockdev write zeroes read split partial ...passed 00:07:10.921 Test: blockdev reset ...[2024-11-27 19:05:20.497398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.921 [2024-11-27 19:05:20.500421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.921 passed 00:07:10.921 Test: blockdev write read 8 blocks ...passed 00:07:10.921 Test: blockdev write read size > 128k ...passed 00:07:10.921 Test: blockdev write read invalid size ...passed 00:07:10.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.921 Test: blockdev write read max offset ...passed 00:07:10.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.921 Test: blockdev writev readv 8 blocks ...passed 00:07:10.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.921 Test: blockdev writev readv block ...passed 00:07:10.921 Test: blockdev writev readv size > 128k ...passed 00:07:10.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.921 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.505956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1634000 len:0x1000 00:07:10.921 [2024-11-27 19:05:20.506009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme passthru rw ...passed 00:07:10.921 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:05:20.506512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.921 passed 00:07:10.921 Test: blockdev nvme admin passthru ...[2024-11-27 19:05:20.506538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.921 passed 00:07:10.921 Test: blockdev copy ...passed 00:07:10.921 Suite: bdevio tests on: Nvme1n1p2 00:07:10.921 Test: blockdev write read block ...passed 00:07:10.921 Test: blockdev write zeroes read block ...passed 00:07:10.921 Test: blockdev write zeroes read no split ...passed 00:07:10.921 Test: blockdev write zeroes read split ...passed 00:07:10.921 Test: blockdev write zeroes read split partial ...passed 00:07:10.921 Test: blockdev reset ...[2024-11-27 19:05:20.547948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:10.921 [2024-11-27 19:05:20.550602] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:10.921 passed 00:07:10.921 Test: blockdev write read 8 blocks ...passed 00:07:10.921 Test: blockdev write read size > 128k ...passed 00:07:10.921 Test: blockdev write read invalid size ...passed 00:07:10.921 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.921 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.921 Test: blockdev write read max offset ...passed 00:07:10.921 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.921 Test: blockdev writev readv 8 blocks ...passed 00:07:10.921 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.180 Test: blockdev writev readv block ...passed 00:07:11.180 Test: blockdev writev readv size > 128k ...passed 00:07:11.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.180 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.556840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1630000 len:0x1000 00:07:11.180 [2024-11-27 19:05:20.556886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.180 passed 00:07:11.180 Test: blockdev nvme passthru rw ...passed 00:07:11.180 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.180 Test: blockdev nvme admin passthru ...passed 00:07:11.180 Test: blockdev copy ...passed 00:07:11.180 Suite: bdevio tests on: Nvme1n1p1 00:07:11.180 Test: blockdev write read block ...passed 00:07:11.180 Test: blockdev write zeroes read block ...passed 00:07:11.180 Test: blockdev write zeroes read no split ...passed 00:07:11.180 Test: blockdev write zeroes read split ...passed 00:07:11.180 Test: blockdev write zeroes read split partial ...passed 00:07:11.180 Test: blockdev reset ...[2024-11-27 19:05:20.600413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:11.180 [2024-11-27 19:05:20.603402] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:11.180 passed 00:07:11.180 Test: blockdev write read 8 blocks ...passed 00:07:11.180 Test: blockdev write read size > 128k ...passed 00:07:11.180 Test: blockdev write read invalid size ...passed 00:07:11.180 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.180 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.180 Test: blockdev write read max offset ...passed 00:07:11.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.180 Test: blockdev writev readv 8 blocks ...passed 00:07:11.180 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.180 Test: blockdev writev readv block ...passed 00:07:11.180 Test: blockdev writev readv size > 128k ...passed 00:07:11.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.180 Test: blockdev comparev and writev ...[2024-11-27 19:05:20.617721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2afe0e000 len:0x1000 00:07:11.180 [2024-11-27 19:05:20.617785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.180 passed 00:07:11.180 Test: blockdev nvme passthru rw ...passed 00:07:11.180 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.180 Test: blockdev nvme admin passthru ...passed 00:07:11.180 Test: blockdev copy ...passed 00:07:11.180 Suite: bdevio tests on: Nvme0n1 00:07:11.180 Test: blockdev write read block ...passed 00:07:11.180 Test: blockdev write zeroes read block ...passed 00:07:11.180 Test: blockdev write zeroes read no split ...passed 00:07:11.180 Test: blockdev write zeroes read split ...passed 00:07:11.180 Test: blockdev write zeroes read split partial ...passed 00:07:11.180 Test: blockdev reset ...[2024-11-27 19:05:20.666506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:11.180 passed 00:07:11.180 Test: blockdev write read 8 blocks ...[2024-11-27 19:05:20.669927] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:11.180 passed 00:07:11.180 Test: blockdev write read size > 128k ...passed 00:07:11.180 Test: blockdev write read invalid size ...passed 00:07:11.180 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.180 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.180 Test: blockdev write read max offset ...passed 00:07:11.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.180 Test: blockdev writev readv 8 blocks ...passed 00:07:11.180 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.180 Test: blockdev writev readv block ...passed 00:07:11.180 Test: blockdev writev readv size > 128k ...passed 00:07:11.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.180 Test: blockdev comparev and writev ...passed 00:07:11.180 Test: blockdev nvme passthru rw ...[2024-11-27 19:05:20.682202] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:11.180 separate metadata which is not supported yet. 
00:07:11.180 passed 00:07:11.180 Test: blockdev nvme passthru vendor specific ...[2024-11-27 19:05:20.683586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:11.180 passed 00:07:11.180 Test: blockdev nvme admin passthru ...[2024-11-27 19:05:20.683632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:11.180 passed 00:07:11.180 Test: blockdev copy ...passed 00:07:11.180 00:07:11.180 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.180 suites 7 7 n/a 0 0 00:07:11.180 tests 161 161 161 0 0 00:07:11.180 asserts 1025 1025 1025 0 n/a 00:07:11.180 00:07:11.180 Elapsed time = 1.079 seconds 00:07:11.180 0 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61426 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61426 ']' 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61426 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61426 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.180 killing process with pid 61426 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61426' 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61426 00:07:11.180 19:05:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61426 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:11.749 00:07:11.749 real 0m2.009s 00:07:11.749 user 0m5.043s 00:07:11.749 sys 0m0.313s 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 ************************************ 00:07:11.749 END TEST bdev_bounds 00:07:11.749 ************************************ 00:07:11.749 19:05:21 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.749 19:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.749 19:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.749 19:05:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 ************************************ 00:07:11.749 START TEST bdev_nbd 00:07:11.749 ************************************ 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61481 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61481 /var/tmp/spdk-nbd.sock 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61481 ']' 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 19:05:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.011 [2024-11-27 19:05:21.418911] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:12.011 [2024-11-27 19:05:21.419042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.011 [2024-11-27 19:05:21.581705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.272 [2024-11-27 19:05:21.696286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.844 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.103 1+0 records in 00:07:13.103 1+0 records out 00:07:13.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469432 s, 8.7 MB/s 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.103 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:13.363 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:13.363 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:13.363 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:13.363 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.363 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.364 1+0 records in 00:07:13.364 1+0 records out 00:07:13.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505219 s, 8.1 MB/s 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.364 1+0 records in 00:07:13.364 1+0 records out 00:07:13.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480173 s, 8.5 MB/s 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.364 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.651 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.651 19:05:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.651 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.651 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.651 19:05:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.651 1+0 records in 00:07:13.651 1+0 records out 00:07:13.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428315 s, 9.6 MB/s 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.651 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.910 1+0 records in 00:07:13.910 1+0 records out 00:07:13.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377103 s, 10.9 MB/s 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.910 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.169 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.170 1+0 records in 00:07:14.170 1+0 records out 00:07:14.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044376 s, 9.2 MB/s 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.170 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:14.427 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:14.427 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:14.427 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:14.427 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:14.427 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.428 1+0 records in 00:07:14.428 1+0 records out 00:07:14.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379822 s, 10.8 MB/s 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.428 19:05:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd0", 00:07:14.686 "bdev_name": "Nvme0n1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd1", 00:07:14.686 "bdev_name": "Nvme1n1p1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd2", 00:07:14.686 "bdev_name": "Nvme1n1p2" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd3", 00:07:14.686 "bdev_name": "Nvme2n1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd4", 00:07:14.686 "bdev_name": "Nvme2n2" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd5", 00:07:14.686 "bdev_name": "Nvme2n3" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd6", 00:07:14.686 "bdev_name": "Nvme3n1" 00:07:14.686 } 00:07:14.686 ]' 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd0", 00:07:14.686 "bdev_name": "Nvme0n1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd1", 00:07:14.686 "bdev_name": "Nvme1n1p1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd2", 00:07:14.686 "bdev_name": "Nvme1n1p2" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd3", 00:07:14.686 "bdev_name": "Nvme2n1" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd4", 00:07:14.686 "bdev_name": "Nvme2n2" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd5", 00:07:14.686 "bdev_name": "Nvme2n3" 00:07:14.686 }, 00:07:14.686 { 00:07:14.686 "nbd_device": "/dev/nbd6", 00:07:14.686 "bdev_name": "Nvme3n1" 00:07:14.686 } 00:07:14.686 ]' 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.686 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.945 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.203 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.461 19:05:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.719 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.977 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.236 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.496 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.496 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.496 19:05:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.496 19:05:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.496 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:16.755 /dev/nbd0 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.755 1+0 records in 00:07:16.755 1+0 records out 00:07:16.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049823 s, 8.2 MB/s 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.755 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:17.013 /dev/nbd1 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.013 19:05:26 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.013 1+0 records in 00:07:17.013 1+0 records out 00:07:17.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283053 s, 14.5 MB/s 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.013 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:17.271 /dev/nbd10 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.271 1+0 records in 00:07:17.271 1+0 records out 00:07:17.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978428 s, 4.2 MB/s 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.271 19:05:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:17.529 /dev/nbd11 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.529 1+0 records in 00:07:17.529 1+0 records out 00:07:17.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735436 s, 5.6 MB/s 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.529 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:17.788 /dev/nbd12 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.788 1+0 records in 00:07:17.788 1+0 records out 00:07:17.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638939 s, 6.4 MB/s 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.788 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:18.047 /dev/nbd13 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.047 1+0 records in 00:07:18.047 1+0 records out 00:07:18.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558719 s, 7.3 MB/s 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.047 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:18.306 /dev/nbd14 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.306 1+0 records in 00:07:18.306 1+0 records out 00:07:18.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440217 s, 9.3 MB/s 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.306 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.565 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.565 { 00:07:18.565 "nbd_device": "/dev/nbd0", 00:07:18.565 "bdev_name": "Nvme0n1" 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "nbd_device": "/dev/nbd1", 00:07:18.565 "bdev_name": "Nvme1n1p1" 00:07:18.565 }, 00:07:18.565 { 00:07:18.565 "nbd_device": "/dev/nbd10", 00:07:18.566 "bdev_name": "Nvme1n1p2" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd11", 00:07:18.566 "bdev_name": "Nvme2n1" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd12", 00:07:18.566 "bdev_name": "Nvme2n2" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd13", 00:07:18.566 "bdev_name": "Nvme2n3" 
00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd14", 00:07:18.566 "bdev_name": "Nvme3n1" 00:07:18.566 } 00:07:18.566 ]' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd0", 00:07:18.566 "bdev_name": "Nvme0n1" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd1", 00:07:18.566 "bdev_name": "Nvme1n1p1" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd10", 00:07:18.566 "bdev_name": "Nvme1n1p2" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd11", 00:07:18.566 "bdev_name": "Nvme2n1" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd12", 00:07:18.566 "bdev_name": "Nvme2n2" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd13", 00:07:18.566 "bdev_name": "Nvme2n3" 00:07:18.566 }, 00:07:18.566 { 00:07:18.566 "nbd_device": "/dev/nbd14", 00:07:18.566 "bdev_name": "Nvme3n1" 00:07:18.566 } 00:07:18.566 ]' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.566 /dev/nbd1 00:07:18.566 /dev/nbd10 00:07:18.566 /dev/nbd11 00:07:18.566 /dev/nbd12 00:07:18.566 /dev/nbd13 00:07:18.566 /dev/nbd14' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.566 /dev/nbd1 00:07:18.566 /dev/nbd10 00:07:18.566 /dev/nbd11 00:07:18.566 /dev/nbd12 00:07:18.566 /dev/nbd13 00:07:18.566 /dev/nbd14' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:18.566 256+0 records in 00:07:18.566 256+0 records out 00:07:18.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424783 s, 247 MB/s 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.566 19:05:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.566 256+0 records in 00:07:18.566 256+0 records out 00:07:18.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0821113 s, 12.8 MB/s 00:07:18.566 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.566 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.825 256+0 records in 00:07:18.825 256+0 records out 00:07:18.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131536 s, 8.0 MB/s 00:07:18.825 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.825 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:18.825 256+0 records in 00:07:18.825 256+0 records out 00:07:18.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223604 s, 4.7 MB/s 00:07:18.825 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.825 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:19.086 256+0 records in 00:07:19.086 256+0 records out 00:07:19.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2241 s, 4.7 MB/s 00:07:19.086 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.086 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:19.348 256+0 records in 00:07:19.348 256+0 records out 00:07:19.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116059 s, 9.0 MB/s 00:07:19.348 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.348 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:19.348 256+0 records in 00:07:19.348 256+0 records out 00:07:19.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128832 s, 8.1 MB/s 00:07:19.348 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.348 19:05:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:19.609 256+0 records in 00:07:19.609 256+0 records out 00:07:19.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222579 s, 4.7 MB/s 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.609 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.610 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.870 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.129 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.130 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.390 19:05:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.649 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.908 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.167 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.426 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.426 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.426 19:05:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:21.426 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:21.685 malloc_lvol_verify 00:07:21.685 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:21.944 5e426aa8-6fca-43b5-a651-eaea00cb7e7f 00:07:21.944 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.202 93d47f95-5134-4f86-8d05-74e78d73daf1 00:07:22.203 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:22.461 /dev/nbd0 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:22.461 mke2fs 1.47.0 (5-Feb-2023) 00:07:22.461 Discarding device blocks: 0/4096 done 00:07:22.461 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:22.461 00:07:22.461 Allocating group tables: 0/1 done 00:07:22.461 Writing inode tables: 0/1 done 00:07:22.461 Creating journal (1024 blocks): done 00:07:22.461 Writing superblocks and filesystem accounting information: 0/1 done 00:07:22.461 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:22.461 19:05:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61481 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61481 ']' 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61481 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.461 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61481 00:07:22.720 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.720 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.720 killing process with pid 61481 00:07:22.720 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61481' 00:07:22.720 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61481 00:07:22.720 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61481 00:07:23.291 19:05:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:23.291 00:07:23.291 real 0m11.424s 00:07:23.291 user 0m15.971s 00:07:23.291 sys 0m3.812s 00:07:23.291 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.291 19:05:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:23.291 ************************************ 00:07:23.291 END TEST bdev_nbd 00:07:23.291 ************************************ 00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:23.291 skipping fio tests on NVMe due to multi-ns failures. 00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:23.291 19:05:32 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.291 19:05:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:23.291 19:05:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.291 19:05:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.291 ************************************ 00:07:23.291 START TEST bdev_verify 00:07:23.291 ************************************ 00:07:23.291 19:05:32 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.291 [2024-11-27 19:05:32.878531] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:23.291 [2024-11-27 19:05:32.878652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61897 ] 00:07:23.576 [2024-11-27 19:05:33.030843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.576 [2024-11-27 19:05:33.120196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.576 [2024-11-27 19:05:33.120223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.152 Running I/O for 5 seconds... 
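The run_test wrapper that produces the START/END banners and the real/user/sys timing blocks throughout this log can be sketched roughly as follows. This is a simplification; the actual helper in test/common/autotest_common.sh also manages xtrace state and failure bookkeeping:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"        # emits the real/user/sys block seen after each test
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }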
00:07:26.481 23680.00 IOPS, 92.50 MiB/s [2024-11-27T19:05:37.052Z] 23808.00 IOPS, 93.00 MiB/s [2024-11-27T19:05:37.993Z] 23850.67 IOPS, 93.17 MiB/s [2024-11-27T19:05:38.936Z] 24720.00 IOPS, 96.56 MiB/s [2024-11-27T19:05:38.936Z] 23616.00 IOPS, 92.25 MiB/s
00:07:29.301 Latency(us)
00:07:29.301 [2024-11-27T19:05:38.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:29.301 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0xbd0bd
00:07:29.301 Nvme0n1 : 5.07 1640.04 6.41 0.00 0.00 77890.70 14720.39 103244.41
00:07:29.301 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:29.301 Nvme0n1 : 5.07 1692.66 6.61 0.00 0.00 75371.10 15123.69 78643.20
00:07:29.301 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x4ff80
00:07:29.301 Nvme1n1p1 : 5.07 1639.55 6.40 0.00 0.00 77801.78 15325.34 100018.02
00:07:29.301 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:29.301 Nvme1n1p1 : 5.07 1692.18 6.61 0.00 0.00 75214.09 15627.82 75416.81
00:07:29.301 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x4ff7f
00:07:29.301 Nvme1n1p2 : 5.08 1638.52 6.40 0.00 0.00 77719.25 16434.41 97598.23
00:07:29.301 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:29.301 Nvme1n1p2 : 5.07 1691.63 6.61 0.00 0.00 75047.75 16736.89 73803.62
00:07:29.301 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x80000
00:07:29.301 Nvme2n1 : 5.08 1638.07 6.40 0.00 0.00 77625.16 16636.06 94775.14
00:07:29.301 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x80000 length 0x80000
00:07:29.301 Nvme2n1 : 5.07 1691.13 6.61 0.00 0.00 74888.37 18047.61 71383.83
00:07:29.301 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x80000
00:07:29.301 Nvme2n2 : 5.08 1637.56 6.40 0.00 0.00 77520.83 16636.06 96791.63
00:07:29.301 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x80000 length 0x80000
00:07:29.301 Nvme2n2 : 5.08 1700.63 6.64 0.00 0.00 74368.56 4234.63 72190.42
00:07:29.301 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x80000
00:07:29.301 Nvme2n3 : 5.08 1636.32 6.39 0.00 0.00 77423.22 13913.80 99211.42
00:07:29.301 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x80000 length 0x80000
00:07:29.301 Nvme2n3 : 5.08 1699.34 6.64 0.00 0.00 74322.56 7208.96 76223.41
00:07:29.301 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x0 length 0x20000
00:07:29.301 Nvme3n1 : 5.09 1635.09 6.39 0.00 0.00 77336.98 7813.91 102034.51
00:07:29.301 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.301 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.09 1698.07 6.63 0.00 0.00 74290.73 9679.16 77030.01
[2024-11-27T19:05:38.936Z] ===================================================================================================================
00:07:29.301 [2024-11-27T19:05:38.936Z] Total : 23330.80 91.14 0.00 0.00 76177.06 4234.63 103244.41
00:07:30.244
00:07:30.244 real 0m6.896s
00:07:30.244 user 0m12.886s
00:07:30.244 sys 0m0.222s
00:07:30.244 19:05:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.244 ************************************
00:07:30.244 END TEST bdev_verify
00:07:30.244 ************************************
00:07:30.244 19:05:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:30.244 19:05:39 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:30.244 19:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:30.244 19:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.244 19:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:30.244 ************************************
00:07:30.244 START TEST bdev_verify_big_io
00:07:30.244 ************************************
00:07:30.244 19:05:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:30.244 [2024-11-27 19:05:39.834441] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:07:30.244 [2024-11-27 19:05:39.834555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61995 ]
00:07:30.505 [2024-11-27 19:05:39.993516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:30.505 [2024-11-27 19:05:40.105815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:30.505 [2024-11-27 19:05:40.105943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.446 Running I/O for 5 seconds...
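The table columns above are, in order: runtime(s), IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds. If a run like this is captured to a file (bdevperf.log below is a hypothetical name), the per-job rows can be pulled out with awk; field offsets are counted from the end of the line, so a leading timestamp column does not matter:

  awk '/ : / && !/Total/ && $(NF-6) ~ /^[0-9.]+$/ {
      printf "%-10s iops=%-9s mibps=%-7s avg_lat_us=%s\n",
             $(NF-9), $(NF-6), $(NF-5), $(NF-2)
  }' bdevperf.log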
00:07:37.541 1580.00 IOPS, 98.75 MiB/s [2024-11-27T19:05:47.748Z] 2514.50 IOPS, 157.16 MiB/s [2024-11-27T19:05:48.010Z] 3331.33 IOPS, 208.21 MiB/s
00:07:38.375 Latency(us)
00:07:38.375 [2024-11-27T19:05:48.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:38.375 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0xbd0b
00:07:38.375 Nvme0n1 : 5.84 109.59 6.85 0.00 0.00 1104116.50 17644.31 1290555.08
00:07:38.375 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:38.375 Nvme0n1 : 6.32 48.14 3.01 0.00 0.00 2482869.17 17241.01 2348810.24
00:07:38.375 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x4ff8
00:07:38.375 Nvme1n1p1 : 5.84 97.92 6.12 0.00 0.00 1197740.03 108890.58 1910021.51
00:07:38.375 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:38.375 Nvme1n1p1 : 6.03 63.67 3.98 0.00 0.00 1772016.77 136314.88 1987454.82
00:07:38.375 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x4ff7
00:07:38.375 Nvme1n1p2 : 5.84 110.88 6.93 0.00 0.00 1031372.96 98001.53 1013085.74
00:07:38.375 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:38.375 Nvme1n1p2 : 6.25 71.81 4.49 0.00 0.00 1489397.39 32062.23 1780966.01
00:07:38.375 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x8000
00:07:38.375 Nvme2n1 : 6.04 122.90 7.68 0.00 0.00 915959.83 54041.99 1167952.34
00:07:38.375 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x8000 length 0x8000
00:07:38.375 Nvme2n1 : 6.32 81.53 5.10 0.00 0.00 1244354.30 30852.33 1793871.56
00:07:38.375 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x8000
00:07:38.375 Nvme2n2 : 6.04 127.16 7.95 0.00 0.00 861520.34 54848.59 1200216.22
00:07:38.375 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x8000 length 0x8000
00:07:38.375 Nvme2n2 : 6.41 104.24 6.51 0.00 0.00 937005.11 20164.92 1819682.66
00:07:38.375 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x8000
00:07:38.375 Nvme2n3 : 6.09 130.08 8.13 0.00 0.00 812582.49 45976.02 1232480.10
00:07:38.375 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x8000 length 0x8000
00:07:38.375 Nvme2n3 : 6.72 190.28 11.89 0.00 0.00 487658.41 12351.02 1845493.76
00:07:38.375 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x0 length 0x2000
00:07:38.375 Nvme3n1 : 6.22 149.36 9.33 0.00 0.00 691033.78 1777.03 1264743.98
00:07:38.375 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:38.375 Verification LBA range: start 0x2000 length 0x2000
00:07:38.375 Nvme3n1 : 7.02 346.45 21.65 0.00 0.00 254728.51 437.96 1871304.86
[2024-11-27T19:05:48.010Z] ===================================================================================================================
00:07:38.375 [2024-11-27T19:05:48.010Z] Total : 1754.01 109.63 0.00 0.00 836257.86 437.96 2348810.24
00:07:40.292
00:07:40.292 real 0m9.782s
00:07:40.292 user 0m18.567s
00:07:40.292 sys 0m0.264s
00:07:40.292 19:05:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.292 19:05:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:40.292 ************************************
00:07:40.292 END TEST bdev_verify_big_io
00:07:40.292 ************************************
00:07:40.292 19:05:49 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.292 19:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:40.292 19:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.292 19:05:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:40.292 ************************************
00:07:40.292 START TEST bdev_write_zeroes
00:07:40.292 ************************************
00:07:40.292 19:05:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.292 [2024-11-27 19:05:49.660067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:07:40.292 [2024-11-27 19:05:49.660197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62116 ]
00:07:40.292 [2024-11-27 19:05:49.820852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.553 [2024-11-27 19:05:49.927435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.128 Running I/O for 1 seconds...
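A quick consistency check on the two verify tables above: MiB/s is simply IOPS times the IO size (-o) divided by 2^20. Worked against the Total rows, the numbers line up:

  # verify run, -o 4096:   23330.80 IOPS * 4096 B  / 1048576 =  91.14 MiB/s
  # big_io run, -o 65536:  1754.01 IOPS  * 65536 B / 1048576 = 109.63 MiB/s
  awk 'BEGIN {
      printf "verify: %.2f MiB/s\nbig_io: %.2f MiB/s\n",
             23330.80 * 4096 / 1048576, 1754.01 * 65536 / 1048576
  }'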
00:07:42.063 54592.00 IOPS, 213.25 MiB/s
00:07:42.063 Latency(us)
00:07:42.063 [2024-11-27T19:05:51.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:42.063 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme0n1 : 1.02 7805.89 30.49 0.00 0.00 16350.95 7662.67 30650.68
00:07:42.063 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme1n1p1 : 1.03 7796.40 30.45 0.00 0.00 16338.04 12250.19 29642.44
00:07:42.063 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme1n1p2 : 1.03 7816.29 30.53 0.00 0.00 16183.88 8973.39 29440.79
00:07:42.063 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme2n1 : 1.03 7807.43 30.50 0.00 0.00 16166.20 9225.45 28835.84
00:07:42.063 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme2n2 : 1.03 7782.01 30.40 0.00 0.00 16204.83 9729.58 29440.79
00:07:42.063 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme2n3 : 1.03 7773.28 30.36 0.00 0.00 16194.37 9326.28 30045.74
00:07:42.063 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:42.063 Nvme3n1 : 1.03 7702.45 30.09 0.00 0.00 16309.54 9729.58 31860.58
00:07:42.063 [2024-11-27T19:05:51.698Z] ===================================================================================================================
00:07:42.063 [2024-11-27T19:05:51.698Z] Total : 54483.76 212.83 0.00 0.00 16249.45 7662.67 31860.58
00:07:43.004
00:07:43.004 real 0m2.967s
00:07:43.004 user 0m2.575s
00:07:43.004 sys 0m0.270s
00:07:43.004 19:05:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.004 ************************************
00:07:43.004 END TEST bdev_write_zeroes
00:07:43.004 ************************************
00:07:43.004 19:05:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:43.004 19:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:43.004 19:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:43.004 19:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.004 19:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:43.264 ************************************
00:07:43.264 START TEST bdev_json_nonenclosed
00:07:43.264 ************************************
00:07:43.264 19:05:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:43.264 [2024-11-27 19:05:52.744698] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
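The bdev_json_nonenclosed test starting here deliberately hands bdevperf a configuration whose top-level object is missing its enclosing braces; json_config_prepare_ctx is expected to reject it and the app to stop with a non-zero code, as the next lines show. The actual contents of test/bdev/nonenclosed.json are not reproduced in this log; a plausible minimal reconstruction:

  # Hypothetical reconstruction; a valid config would wrap this in { ... }.
  {
      echo '"subsystems": ['
      echo '  { "subsystem": "bdev", "config": [] }'
      echo ']'
  } > /tmp/nonenclosed.json
  # Expected: *ERROR*: Invalid JSON configuration: not enclosed in {}.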
00:07:43.264 [2024-11-27 19:05:52.744884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:07:43.524 [2024-11-27 19:05:52.916197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.524 [2024-11-27 19:05:53.018651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.524 [2024-11-27 19:05:53.018735] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:43.524 [2024-11-27 19:05:53.018754] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:43.524 [2024-11-27 19:05:53.018764] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.783 00:07:43.784 real 0m0.550s 00:07:43.784 user 0m0.325s 00:07:43.784 sys 0m0.119s 00:07:43.784 19:05:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.784 19:05:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:43.784 ************************************ 00:07:43.784 END TEST bdev_json_nonenclosed 00:07:43.784 ************************************ 00:07:43.784 19:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.784 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:43.784 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.784 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.784 ************************************ 00:07:43.784 START TEST bdev_json_nonarray 00:07:43.784 ************************************ 00:07:43.784 19:05:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.784 [2024-11-27 19:05:53.302786] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:43.784 [2024-11-27 19:05:53.302905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62200 ] 00:07:44.044 [2024-11-27 19:05:53.458743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.044 [2024-11-27 19:05:53.565101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.044 [2024-11-27 19:05:53.565208] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
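bdev_json_nonarray is the companion negative test: the config is properly enclosed this time, but 'subsystems' is given as an object rather than an array, which trips the second validation branch seen in the ERROR line above. Again, the real test/bdev/nonarray.json is not shown in this log; a plausible sketch:

  # Hypothetical reconstruction of the nonarray case.
  {
      echo '{'
      echo '  "subsystems": { "subsystem": "bdev" }'
      echo '}'
  } > /tmp/nonarray.json
  # Expected: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.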
00:07:44.044 [2024-11-27 19:05:53.565228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.044 [2024-11-27 19:05:53.565239] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.304 00:07:44.304 real 0m0.510s 00:07:44.304 user 0m0.308s 00:07:44.304 sys 0m0.098s 00:07:44.304 19:05:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.304 19:05:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:44.304 ************************************ 00:07:44.304 END TEST bdev_json_nonarray 00:07:44.304 ************************************ 00:07:44.304 19:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:44.304 19:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:44.304 19:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:44.304 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.304 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.305 19:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.305 ************************************ 00:07:44.305 START TEST bdev_gpt_uuid 00:07:44.305 ************************************ 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62220 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62220 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62220 ']' 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.305 19:05:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.305 [2024-11-27 19:05:53.865067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
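The waitforlisten step logged above blocks until the freshly started spdk_tgt answers on its RPC socket. A minimal sketch of the idea (simplified; the retry bound and poll interval here are assumptions, and rpc_get_methods is used only because any cheap RPC will do):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1     # target process died
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                 rpc_get_methods &> /dev/null; then
              return 0                                # socket is accepting RPCs
          fi
          sleep 0.1
      done
      return 1                                        # timed out
  }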
00:07:44.305 [2024-11-27 19:05:53.865209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62220 ] 00:07:44.566 [2024-11-27 19:05:54.027723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.566 [2024-11-27 19:05:54.134345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.511 19:05:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.511 19:05:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:45.511 19:05:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.511 19:05:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.511 19:05:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 Some configs were skipped because the RPC state that can call them passed over. 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:45.511 { 00:07:45.511 "name": "Nvme1n1p1", 00:07:45.511 "aliases": [ 00:07:45.511 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:45.511 ], 00:07:45.511 "product_name": "GPT Disk", 00:07:45.511 "block_size": 4096, 00:07:45.511 "num_blocks": 655104, 00:07:45.511 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:45.511 "assigned_rate_limits": { 00:07:45.511 "rw_ios_per_sec": 0, 00:07:45.511 "rw_mbytes_per_sec": 0, 00:07:45.511 "r_mbytes_per_sec": 0, 00:07:45.511 "w_mbytes_per_sec": 0 00:07:45.511 }, 00:07:45.511 "claimed": false, 00:07:45.511 "zoned": false, 00:07:45.511 "supported_io_types": { 00:07:45.511 "read": true, 00:07:45.511 "write": true, 00:07:45.511 "unmap": true, 00:07:45.511 "flush": true, 00:07:45.511 "reset": true, 00:07:45.511 "nvme_admin": false, 00:07:45.511 "nvme_io": false, 00:07:45.511 "nvme_io_md": false, 00:07:45.511 "write_zeroes": true, 00:07:45.511 "zcopy": false, 00:07:45.511 "get_zone_info": false, 00:07:45.511 "zone_management": false, 00:07:45.511 "zone_append": false, 00:07:45.511 "compare": true, 00:07:45.511 "compare_and_write": false, 00:07:45.511 "abort": true, 00:07:45.511 "seek_hole": false, 00:07:45.511 "seek_data": false, 00:07:45.511 "copy": true, 00:07:45.511 "nvme_iov_md": false 00:07:45.511 }, 00:07:45.511 "driver_specific": { 
00:07:45.511 "gpt": { 00:07:45.511 "base_bdev": "Nvme1n1", 00:07:45.511 "offset_blocks": 256, 00:07:45.511 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:45.511 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:45.511 "partition_name": "SPDK_TEST_first" 00:07:45.511 } 00:07:45.511 } 00:07:45.511 } 00:07:45.511 ]' 00:07:45.511 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:45.772 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:45.773 { 00:07:45.773 "name": "Nvme1n1p2", 00:07:45.773 "aliases": [ 00:07:45.773 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:45.773 ], 00:07:45.773 "product_name": "GPT Disk", 00:07:45.773 "block_size": 4096, 00:07:45.773 "num_blocks": 655103, 00:07:45.773 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.773 "assigned_rate_limits": { 00:07:45.773 "rw_ios_per_sec": 0, 00:07:45.773 "rw_mbytes_per_sec": 0, 00:07:45.773 "r_mbytes_per_sec": 0, 00:07:45.773 "w_mbytes_per_sec": 0 00:07:45.773 }, 00:07:45.773 "claimed": false, 00:07:45.773 "zoned": false, 00:07:45.773 "supported_io_types": { 00:07:45.773 "read": true, 00:07:45.773 "write": true, 00:07:45.773 "unmap": true, 00:07:45.773 "flush": true, 00:07:45.773 "reset": true, 00:07:45.773 "nvme_admin": false, 00:07:45.773 "nvme_io": false, 00:07:45.773 "nvme_io_md": false, 00:07:45.773 "write_zeroes": true, 00:07:45.773 "zcopy": false, 00:07:45.773 "get_zone_info": false, 00:07:45.773 "zone_management": false, 00:07:45.773 "zone_append": false, 00:07:45.773 "compare": true, 00:07:45.773 "compare_and_write": false, 00:07:45.773 "abort": true, 00:07:45.773 "seek_hole": false, 00:07:45.773 "seek_data": false, 00:07:45.773 "copy": true, 00:07:45.773 "nvme_iov_md": false 00:07:45.773 }, 00:07:45.773 "driver_specific": { 00:07:45.773 "gpt": { 00:07:45.773 "base_bdev": "Nvme1n1", 00:07:45.773 "offset_blocks": 655360, 00:07:45.773 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:45.773 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.773 "partition_name": "SPDK_TEST_second" 00:07:45.773 } 00:07:45.773 } 00:07:45.773 } 00:07:45.773 ]' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62220 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62220 ']' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62220 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62220 00:07:45.773 killing process with pid 62220 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62220' 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62220 00:07:45.773 19:05:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62220 00:07:47.696 ************************************ 00:07:47.696 END TEST bdev_gpt_uuid 00:07:47.696 ************************************ 00:07:47.696 00:07:47.696 real 0m3.225s 00:07:47.696 user 0m3.271s 00:07:47.696 sys 0m0.451s 00:07:47.696 19:05:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.696 19:05:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:47.696 19:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.957 Waiting for block devices as requested 00:07:47.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.218 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:48.218 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.218 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.500 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:53.500 19:06:02 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:53.500 19:06:02 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:53.500 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.500 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.500 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:53.500 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:53.500 19:06:03 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:53.500 00:07:53.500 real 0m57.465s 00:07:53.500 user 1m12.941s 00:07:53.500 sys 0m8.591s 00:07:53.500 19:06:03 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.500 19:06:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.500 ************************************ 00:07:53.500 END TEST blockdev_nvme_gpt 00:07:53.500 ************************************ 00:07:53.759 19:06:03 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:53.759 19:06:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.759 19:06:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.759 19:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:53.759 ************************************ 00:07:53.759 START TEST nvme 00:07:53.759 ************************************ 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:53.759 * Looking for test storage... 00:07:53.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.759 19:06:03 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.759 19:06:03 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.759 19:06:03 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.759 19:06:03 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.759 19:06:03 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.759 19:06:03 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:53.759 19:06:03 nvme -- scripts/common.sh@345 -- # : 1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.759 19:06:03 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.759 19:06:03 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@353 -- # local d=1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.759 19:06:03 nvme -- scripts/common.sh@355 -- # echo 1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.759 19:06:03 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@353 -- # local d=2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.759 19:06:03 nvme -- scripts/common.sh@355 -- # echo 2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.759 19:06:03 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.759 19:06:03 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.759 19:06:03 nvme -- scripts/common.sh@368 -- # return 0 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 19:06:03 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 19:06:03 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:54.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.583 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.583 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.841 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.841 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.841 19:06:04 nvme -- nvme/nvme.sh@79 -- # uname 00:07:54.841 19:06:04 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:54.841 19:06:04 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:54.841 19:06:04 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:54.841 19:06:04 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1075 -- # stubpid=62862 00:07:54.841 Waiting for stub to ready for secondary processes... 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62862 ]] 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:54.841 19:06:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:54.841 [2024-11-27 19:06:04.347115] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:54.841 [2024-11-27 19:06:04.347251] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:55.776 [2024-11-27 19:06:05.110485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.777 [2024-11-27 19:06:05.207265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.777 [2024-11-27 19:06:05.207592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.777 [2024-11-27 19:06:05.207615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.777 [2024-11-27 19:06:05.221152] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:55.777 [2024-11-27 19:06:05.221188] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:55.777 [2024-11-27 19:06:05.234399] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:55.777 [2024-11-27 19:06:05.234511] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:55.777 [2024-11-27 19:06:05.236957] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:55.777 [2024-11-27 19:06:05.237155] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:55.777 [2024-11-27 19:06:05.237216] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:55.777 [2024-11-27 19:06:05.239715] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:55.777 [2024-11-27 19:06:05.239878] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:55.777 [2024-11-27 19:06:05.239939] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:55.777 [2024-11-27 19:06:05.241870] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:55.777 [2024-11-27 19:06:05.242003] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:55.777 [2024-11-27 19:06:05.242057] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:55.777 [2024-11-27 19:06:05.242091] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:55.777 [2024-11-27 19:06:05.242135] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:55.777 19:06:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:55.777 done. 00:07:55.777 19:06:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:55.777 19:06:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.777 19:06:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:55.777 19:06:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.777 19:06:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.777 ************************************ 00:07:55.777 START TEST nvme_reset 00:07:55.777 ************************************ 00:07:55.777 19:06:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:56.035 Initializing NVMe Controllers 00:07:56.035 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:56.035 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:56.035 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:56.035 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:56.035 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:56.035 00:07:56.035 real 0m0.217s 00:07:56.035 user 0m0.074s 00:07:56.035 sys 0m0.100s 00:07:56.035 19:06:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.035 19:06:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:56.035 ************************************ 00:07:56.035 END TEST nvme_reset 00:07:56.035 ************************************ 00:07:56.035 19:06:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:56.035 19:06:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.035 19:06:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.035 19:06:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.035 ************************************ 00:07:56.035 START TEST nvme_identify 00:07:56.035 ************************************ 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:56.035 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:56.035 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:56.035 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.035 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:56.035 19:06:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:56.035 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:56.297 [2024-11-27 
19:06:05.804681] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62883 terminated unexpected 00:07:56.297 ===================================================== 00:07:56.297 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.297 ===================================================== 00:07:56.297 Controller Capabilities/Features 00:07:56.297 ================================ 00:07:56.297 Vendor ID: 1b36 00:07:56.297 Subsystem Vendor ID: 1af4 00:07:56.297 Serial Number: 12340 00:07:56.297 Model Number: QEMU NVMe Ctrl 00:07:56.297 Firmware Version: 8.0.0 00:07:56.297 Recommended Arb Burst: 6 00:07:56.297 IEEE OUI Identifier: 00 54 52 00:07:56.297 Multi-path I/O 00:07:56.297 May have multiple subsystem ports: No 00:07:56.297 May have multiple controllers: No 00:07:56.297 Associated with SR-IOV VF: No 00:07:56.297 Max Data Transfer Size: 524288 00:07:56.297 Max Number of Namespaces: 256 00:07:56.297 Max Number of I/O Queues: 64 00:07:56.297 NVMe Specification Version (VS): 1.4 00:07:56.297 NVMe Specification Version (Identify): 1.4 00:07:56.297 Maximum Queue Entries: 2048 00:07:56.297 Contiguous Queues Required: Yes 00:07:56.297 Arbitration Mechanisms Supported 00:07:56.297 Weighted Round Robin: Not Supported 00:07:56.297 Vendor Specific: Not Supported 00:07:56.297 Reset Timeout: 7500 ms 00:07:56.297 Doorbell Stride: 4 bytes 00:07:56.297 NVM Subsystem Reset: Not Supported 00:07:56.297 Command Sets Supported 00:07:56.297 NVM Command Set: Supported 00:07:56.297 Boot Partition: Not Supported 00:07:56.297 Memory Page Size Minimum: 4096 bytes 00:07:56.297 Memory Page Size Maximum: 65536 bytes 00:07:56.297 Persistent Memory Region: Not Supported 00:07:56.297 Optional Asynchronous Events Supported 00:07:56.297 Namespace Attribute Notices: Supported 00:07:56.297 Firmware Activation Notices: Not Supported 00:07:56.297 ANA Change Notices: Not Supported 00:07:56.297 PLE Aggregate Log Change Notices: Not Supported 00:07:56.297 LBA Status Info Alert Notices: Not Supported 00:07:56.297 EGE Aggregate Log Change Notices: Not Supported 00:07:56.297 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.297 Zone Descriptor Change Notices: Not Supported 00:07:56.297 Discovery Log Change Notices: Not Supported 00:07:56.297 Controller Attributes 00:07:56.297 128-bit Host Identifier: Not Supported 00:07:56.297 Non-Operational Permissive Mode: Not Supported 00:07:56.297 NVM Sets: Not Supported 00:07:56.297 Read Recovery Levels: Not Supported 00:07:56.297 Endurance Groups: Not Supported 00:07:56.297 Predictable Latency Mode: Not Supported 00:07:56.297 Traffic Based Keep Alive: Not Supported 00:07:56.297 Namespace Granularity: Not Supported 00:07:56.297 SQ Associations: Not Supported 00:07:56.297 UUID List: Not Supported 00:07:56.297 Multi-Domain Subsystem: Not Supported 00:07:56.297 Fixed Capacity Management: Not Supported 00:07:56.297 Variable Capacity Management: Not Supported 00:07:56.297 Delete Endurance Group: Not Supported 00:07:56.297 Delete NVM Set: Not Supported 00:07:56.297 Extended LBA Formats Supported: Supported 00:07:56.297 Flexible Data Placement Supported: Not Supported 00:07:56.297 00:07:56.297 Controller Memory Buffer Support 00:07:56.297 ================================ 00:07:56.297 Supported: No 00:07:56.297 00:07:56.297 Persistent Memory Region Support 00:07:56.297 ================================ 00:07:56.297 Supported: No 00:07:56.297 00:07:56.297 Admin Command Set Attributes 00:07:56.297 ============================ 00:07:56.297 Security Send/Receive:
Not Supported 00:07:56.297 Format NVM: Supported 00:07:56.297 Firmware Activate/Download: Not Supported 00:07:56.297 Namespace Management: Supported 00:07:56.297 Device Self-Test: Not Supported 00:07:56.297 Directives: Supported 00:07:56.297 NVMe-MI: Not Supported 00:07:56.297 Virtualization Management: Not Supported 00:07:56.297 Doorbell Buffer Config: Supported 00:07:56.297 Get LBA Status Capability: Not Supported 00:07:56.297 Command & Feature Lockdown Capability: Not Supported 00:07:56.297 Abort Command Limit: 4 00:07:56.297 Async Event Request Limit: 4 00:07:56.297 Number of Firmware Slots: N/A 00:07:56.297 Firmware Slot 1 Read-Only: N/A 00:07:56.297 Firmware Activation Without Reset: N/A 00:07:56.297 Multiple Update Detection Support: N/A 00:07:56.297 Firmware Update Granularity: No Information Provided 00:07:56.297 Per-Namespace SMART Log: Yes 00:07:56.297 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.297 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.297 Command Effects Log Page: Supported 00:07:56.297 Get Log Page Extended Data: Supported 00:07:56.297 Telemetry Log Pages: Not Supported 00:07:56.297 Persistent Event Log Pages: Not Supported 00:07:56.297 Supported Log Pages Log Page: May Support 00:07:56.297 Commands Supported & Effects Log Page: Not Supported 00:07:56.297 Feature Identifiers & Effects Log Page: May Support 00:07:56.297 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.297 Data Area 4 for Telemetry Log: Not Supported 00:07:56.297 Error Log Page Entries Supported: 1 00:07:56.297 Keep Alive: Not Supported 00:07:56.297 00:07:56.297 NVM Command Set Attributes 00:07:56.297 ========================== 00:07:56.297 Submission Queue Entry Size 00:07:56.297 Max: 64 00:07:56.297 Min: 64 00:07:56.297 Completion Queue Entry Size 00:07:56.297 Max: 16 00:07:56.297 Min: 16 00:07:56.297 Number of Namespaces: 256 00:07:56.297 Compare Command: Supported 00:07:56.297 Write Uncorrectable Command: Not Supported 00:07:56.297 Dataset Management Command: Supported 00:07:56.297 Write Zeroes Command: Supported 00:07:56.297 Set Features Save Field: Supported 00:07:56.297 Reservations: Not Supported 00:07:56.297 Timestamp: Supported 00:07:56.297 Copy: Supported 00:07:56.297 Volatile Write Cache: Present 00:07:56.297 Atomic Write Unit (Normal): 1 00:07:56.297 Atomic Write Unit (PFail): 1 00:07:56.297 Atomic Compare & Write Unit: 1 00:07:56.297 Fused Compare & Write: Not Supported 00:07:56.297 Scatter-Gather List 00:07:56.297 SGL Command Set: Supported 00:07:56.297 SGL Keyed: Not Supported 00:07:56.297 SGL Bit Bucket Descriptor: Not Supported 00:07:56.297 SGL Metadata Pointer: Not Supported 00:07:56.297 Oversized SGL: Not Supported 00:07:56.297 SGL Metadata Address: Not Supported 00:07:56.297 SGL Offset: Not Supported 00:07:56.297 Transport SGL Data Block: Not Supported 00:07:56.297 Replay Protected Memory Block: Not Supported 00:07:56.297 00:07:56.297 Firmware Slot Information 00:07:56.297 ========================= 00:07:56.297 Active slot: 1 00:07:56.297 Slot 1 Firmware Revision: 1.0 00:07:56.297 00:07:56.297 00:07:56.297 Commands Supported and Effects 00:07:56.297 ============================== 00:07:56.297 Admin Commands 00:07:56.297 -------------- 00:07:56.297 Delete I/O Submission Queue (00h): Supported 00:07:56.297 Create I/O Submission Queue (01h): Supported 00:07:56.297 Get Log Page (02h): Supported 00:07:56.297 Delete I/O Completion Queue (04h): Supported 00:07:56.297 Create I/O Completion Queue (05h): Supported 00:07:56.297 Identify (06h): Supported
00:07:56.297 Abort (08h): Supported 00:07:56.297 Set Features (09h): Supported 00:07:56.297 Get Features (0Ah): Supported 00:07:56.297 Asynchronous Event Request (0Ch): Supported 00:07:56.297 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.297 Directive Send (19h): Supported 00:07:56.297 Directive Receive (1Ah): Supported 00:07:56.297 Virtualization Management (1Ch): Supported 00:07:56.297 Doorbell Buffer Config (7Ch): Supported 00:07:56.297 Format NVM (80h): Supported LBA-Change 00:07:56.297 I/O Commands 00:07:56.297 ------------ 00:07:56.297 Flush (00h): Supported LBA-Change 00:07:56.297 Write (01h): Supported LBA-Change 00:07:56.297 Read (02h): Supported 00:07:56.297 Compare (05h): Supported 00:07:56.297 Write Zeroes (08h): Supported LBA-Change 00:07:56.297 Dataset Management (09h): Supported LBA-Change 00:07:56.297 Unknown (0Ch): Supported 00:07:56.297 Unknown (12h): Supported 00:07:56.298 Copy (19h): Supported LBA-Change 00:07:56.298 Unknown (1Dh): Supported LBA-Change 00:07:56.298 00:07:56.298 Error Log 00:07:56.298 ========= 00:07:56.298 00:07:56.298 Arbitration 00:07:56.298 =========== 00:07:56.298 Arbitration Burst: no limit 00:07:56.298 00:07:56.298 Power Management 00:07:56.298 ================ 00:07:56.298 Number of Power States: 1 00:07:56.298 Current Power State: Power State #0 00:07:56.298 Power State #0: 00:07:56.298 Max Power: 25.00 W 00:07:56.298 Non-Operational State: Operational 00:07:56.298 Entry Latency: 16 microseconds 00:07:56.298 Exit Latency: 4 microseconds 00:07:56.298 Relative Read Throughput: 0 00:07:56.298 Relative Read Latency: 0 00:07:56.298 Relative Write Throughput: 0 00:07:56.298 Relative Write Latency: 0 00:07:56.298 [2024-11-27 19:06:05.805917] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62883 terminated unexpected 00:07:56.298 Idle Power: Not Reported 00:07:56.298 Active Power: Not Reported 00:07:56.298 Non-Operational Permissive Mode: Not Supported 00:07:56.298 00:07:56.298 Health Information 00:07:56.298 ================== 00:07:56.298 Critical Warnings: 00:07:56.298 Available Spare Space: OK 00:07:56.298 Temperature: OK 00:07:56.298 Device Reliability: OK 00:07:56.298 Read Only: No 00:07:56.298 Volatile Memory Backup: OK 00:07:56.298 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.298 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.298 Available Spare: 0% 00:07:56.298 Available Spare Threshold: 0% 00:07:56.298 Life Percentage Used: 0% 00:07:56.298 Data Units Read: 641 00:07:56.298 Data Units Written: 569 00:07:56.298 Host Read Commands: 35926 00:07:56.298 Host Write Commands: 35712 00:07:56.298 Controller Busy Time: 0 minutes 00:07:56.298 Power Cycles: 0 00:07:56.298 Power On Hours: 0 hours 00:07:56.298 Unsafe Shutdowns: 0 00:07:56.298 Unrecoverable Media Errors: 0 00:07:56.298 Lifetime Error Log Entries: 0 00:07:56.298 Warning Temperature Time: 0 minutes 00:07:56.298 Critical Temperature Time: 0 minutes 00:07:56.298 00:07:56.298 Number of Queues 00:07:56.298 ================ 00:07:56.298 Number of I/O Submission Queues: 64 00:07:56.298 Number of I/O Completion Queues: 64 00:07:56.298 00:07:56.298 ZNS Specific Controller Data 00:07:56.298 ============================ 00:07:56.298 Zone Append Size Limit: 0 00:07:56.298 00:07:56.298 00:07:56.298 Active Namespaces 00:07:56.298 ================= 00:07:56.298 Namespace ID: 1 00:07:56.298 Error Recovery Timeout: Unlimited 00:07:56.298 Command Set Identifier: NVM (00h) 00:07:56.298 Deallocate: Supported 00:07:56.298
Deallocated/Unwritten Error: Supported 00:07:56.298 Deallocated Read Value: All 0x00 00:07:56.298 Deallocate in Write Zeroes: Not Supported 00:07:56.298 Deallocated Guard Field: 0xFFFF 00:07:56.298 Flush: Supported 00:07:56.298 Reservation: Not Supported 00:07:56.298 Metadata Transferred as: Separate Metadata Buffer 00:07:56.298 Namespace Sharing Capabilities: Private 00:07:56.298 Size (in LBAs): 1548666 (5GiB) 00:07:56.298 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.298 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.298 Thin Provisioning: Not Supported 00:07:56.298 Per-NS Atomic Units: No 00:07:56.298 Maximum Single Source Range Length: 128 00:07:56.298 Maximum Copy Length: 128 00:07:56.298 Maximum Source Range Count: 128 00:07:56.298 NGUID/EUI64 Never Reused: No 00:07:56.298 Namespace Write Protected: No 00:07:56.298 Number of LBA Formats: 8 00:07:56.298 Current LBA Format: LBA Format #07 00:07:56.298 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.298 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.298 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.298 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.298 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.298 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.298 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.298 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.298 00:07:56.298 NVM Specific Namespace Data 00:07:56.298 =========================== 00:07:56.298 Logical Block Storage Tag Mask: 0 00:07:56.298 Protection Information Capabilities: 00:07:56.298 16b Guard Protection Information Storage Tag Support: No 00:07:56.298 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.298 Storage Tag Check Read Support: No 00:07:56.298 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.298 ===================================================== 00:07:56.298 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.298 ===================================================== 00:07:56.298 Controller Capabilities/Features 00:07:56.298 ================================ 00:07:56.298 Vendor ID: 1b36 00:07:56.298 Subsystem Vendor ID: 1af4 00:07:56.298 Serial Number: 12341 00:07:56.298 Model Number: QEMU NVMe Ctrl 00:07:56.298 Firmware Version: 8.0.0 00:07:56.298 Recommended Arb Burst: 6 00:07:56.298 IEEE OUI Identifier: 00 54 52 00:07:56.298 Multi-path I/O 00:07:56.298 May have multiple subsystem ports: No 00:07:56.298 May have multiple controllers: No 00:07:56.298 Associated with SR-IOV VF: No 00:07:56.298 Max Data Transfer Size: 524288 00:07:56.298 Max Number of Namespaces: 256 00:07:56.298 Max Number of I/O Queues: 64 00:07:56.298 NVMe Specification Version (VS): 1.4 00:07:56.298 NVMe 
Specification Version (Identify): 1.4 00:07:56.298 Maximum Queue Entries: 2048 00:07:56.298 Contiguous Queues Required: Yes 00:07:56.298 Arbitration Mechanisms Supported 00:07:56.298 Weighted Round Robin: Not Supported 00:07:56.298 Vendor Specific: Not Supported 00:07:56.298 Reset Timeout: 7500 ms 00:07:56.298 Doorbell Stride: 4 bytes 00:07:56.298 NVM Subsystem Reset: Not Supported 00:07:56.298 Command Sets Supported 00:07:56.298 NVM Command Set: Supported 00:07:56.298 Boot Partition: Not Supported 00:07:56.298 Memory Page Size Minimum: 4096 bytes 00:07:56.298 Memory Page Size Maximum: 65536 bytes 00:07:56.298 Persistent Memory Region: Not Supported 00:07:56.298 Optional Asynchronous Events Supported 00:07:56.298 Namespace Attribute Notices: Supported 00:07:56.298 Firmware Activation Notices: Not Supported 00:07:56.298 ANA Change Notices: Not Supported 00:07:56.298 PLE Aggregate Log Change Notices: Not Supported 00:07:56.298 LBA Status Info Alert Notices: Not Supported 00:07:56.298 EGE Aggregate Log Change Notices: Not Supported 00:07:56.298 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.298 Zone Descriptor Change Notices: Not Supported 00:07:56.298 Discovery Log Change Notices: Not Supported 00:07:56.298 Controller Attributes 00:07:56.298 128-bit Host Identifier: Not Supported 00:07:56.298 Non-Operational Permissive Mode: Not Supported 00:07:56.298 NVM Sets: Not Supported 00:07:56.298 Read Recovery Levels: Not Supported 00:07:56.298 Endurance Groups: Not Supported 00:07:56.298 Predictable Latency Mode: Not Supported 00:07:56.298 Traffic Based Keep Alive: Not Supported 00:07:56.298 Namespace Granularity: Not Supported 00:07:56.298 SQ Associations: Not Supported 00:07:56.298 UUID List: Not Supported 00:07:56.298 Multi-Domain Subsystem: Not Supported 00:07:56.298 Fixed Capacity Management: Not Supported 00:07:56.298 Variable Capacity Management: Not Supported 00:07:56.298 Delete Endurance Group: Not Supported 00:07:56.298 Delete NVM Set: Not Supported 00:07:56.298 Extended LBA Formats Supported: Supported 00:07:56.298 Flexible Data Placement Supported: Not Supported 00:07:56.298 00:07:56.298 Controller Memory Buffer Support 00:07:56.298 ================================ 00:07:56.298 Supported: No 00:07:56.298 00:07:56.298 Persistent Memory Region Support 00:07:56.298 ================================ 00:07:56.298 Supported: No 00:07:56.298 00:07:56.298 Admin Command Set Attributes 00:07:56.298 ============================ 00:07:56.298 Security Send/Receive: Not Supported 00:07:56.298 Format NVM: Supported 00:07:56.298 Firmware Activate/Download: Not Supported 00:07:56.298 Namespace Management: Supported 00:07:56.298 Device Self-Test: Not Supported 00:07:56.298 Directives: Supported 00:07:56.298 NVMe-MI: Not Supported 00:07:56.299 Virtualization Management: Not Supported 00:07:56.299 Doorbell Buffer Config: Supported 00:07:56.299 Get LBA Status Capability: Not Supported 00:07:56.299 Command & Feature Lockdown Capability: Not Supported 00:07:56.299 Abort Command Limit: 4 00:07:56.299 Async Event Request Limit: 4 00:07:56.299 Number of Firmware Slots: N/A 00:07:56.299 Firmware Slot 1 Read-Only: N/A 00:07:56.299 Firmware Activation Without Reset: N/A 00:07:56.299 Multiple Update Detection Support: N/A 00:07:56.299 Firmware Update Granularity: No Information Provided 00:07:56.299 Per-Namespace SMART Log: Yes 00:07:56.299 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.299 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.299 Command Effects Log Page: Supported
00:07:56.299 Get Log Page Extended Data: Supported 00:07:56.299 Telemetry Log Pages: Not Supported 00:07:56.299 Persistent Event Log Pages: Not Supported 00:07:56.299 Supported Log Pages Log Page: May Support 00:07:56.299 Commands Supported & Effects Log Page: Not Supported 00:07:56.299 Feature Identifiers & Effects Log Page: May Support 00:07:56.299 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.299 Data Area 4 for Telemetry Log: Not Supported 00:07:56.299 Error Log Page Entries Supported: 1 00:07:56.299 Keep Alive: Not Supported 00:07:56.299 00:07:56.299 NVM Command Set Attributes 00:07:56.299 ========================== 00:07:56.299 Submission Queue Entry Size 00:07:56.299 Max: 64 00:07:56.299 Min: 64 00:07:56.299 Completion Queue Entry Size 00:07:56.299 Max: 16 00:07:56.299 Min: 16 00:07:56.299 Number of Namespaces: 256 00:07:56.299 Compare Command: Supported 00:07:56.299 Write Uncorrectable Command: Not Supported 00:07:56.299 Dataset Management Command: Supported 00:07:56.299 Write Zeroes Command: Supported 00:07:56.299 Set Features Save Field: Supported 00:07:56.299 Reservations: Not Supported 00:07:56.299 Timestamp: Supported 00:07:56.299 Copy: Supported 00:07:56.299 Volatile Write Cache: Present 00:07:56.299 Atomic Write Unit (Normal): 1 00:07:56.299 Atomic Write Unit (PFail): 1 00:07:56.299 Atomic Compare & Write Unit: 1 00:07:56.299 Fused Compare & Write: Not Supported 00:07:56.299 Scatter-Gather List 00:07:56.299 SGL Command Set: Supported 00:07:56.299 SGL Keyed: Not Supported 00:07:56.299 SGL Bit Bucket Descriptor: Not Supported 00:07:56.299 SGL Metadata Pointer: Not Supported 00:07:56.299 Oversized SGL: Not Supported 00:07:56.299 SGL Metadata Address: Not Supported 00:07:56.299 SGL Offset: Not Supported 00:07:56.299 Transport SGL Data Block: Not Supported 00:07:56.299 Replay Protected Memory Block: Not Supported 00:07:56.299 00:07:56.299 Firmware Slot Information 00:07:56.299 ========================= 00:07:56.299 Active slot: 1 00:07:56.299 Slot 1 Firmware Revision: 1.0 00:07:56.299 00:07:56.299 00:07:56.299 Commands Supported and Effects 00:07:56.299 ============================== 00:07:56.299 Admin Commands 00:07:56.299 -------------- 00:07:56.299 Delete I/O Submission Queue (00h): Supported 00:07:56.299 Create I/O Submission Queue (01h): Supported 00:07:56.299 Get Log Page (02h): Supported 00:07:56.299 Delete I/O Completion Queue (04h): Supported 00:07:56.299 Create I/O Completion Queue (05h): Supported 00:07:56.299 Identify (06h): Supported 00:07:56.299 Abort (08h): Supported 00:07:56.299 Set Features (09h): Supported 00:07:56.299 Get Features (0Ah): Supported 00:07:56.299 Asynchronous Event Request (0Ch): Supported 00:07:56.299 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.299 Directive Send (19h): Supported 00:07:56.299 Directive Receive (1Ah): Supported 00:07:56.299 Virtualization Management (1Ch): Supported 00:07:56.299 Doorbell Buffer Config (7Ch): Supported 00:07:56.299 Format NVM (80h): Supported LBA-Change 00:07:56.299 I/O Commands 00:07:56.299 ------------ 00:07:56.299 Flush (00h): Supported LBA-Change 00:07:56.299 Write (01h): Supported LBA-Change 00:07:56.299 Read (02h): Supported 00:07:56.299 Compare (05h): Supported 00:07:56.299 Write Zeroes (08h): Supported LBA-Change 00:07:56.299 Dataset Management (09h): Supported LBA-Change 00:07:56.299 Unknown (0Ch): Supported 00:07:56.299 Unknown (12h): Supported 00:07:56.299 Copy (19h): Supported LBA-Change 00:07:56.299 Unknown (1Dh): Supported LBA-Change 00:07:56.299 00:07:56.299 Error
Log 00:07:56.299 ========= 00:07:56.299 00:07:56.299 Arbitration 00:07:56.299 =========== 00:07:56.299 Arbitration Burst: no limit 00:07:56.299 00:07:56.299 Power Management 00:07:56.299 ================ 00:07:56.299 Number of Power States: 1 00:07:56.299 Current Power State: Power State #0 00:07:56.299 Power State #0: 00:07:56.299 Max Power: 25.00 W 00:07:56.299 Non-Operational State: Operational 00:07:56.299 Entry Latency: 16 microseconds 00:07:56.299 Exit Latency: 4 microseconds 00:07:56.299 Relative Read Throughput: 0 00:07:56.299 Relative Read Latency: 0 00:07:56.299 Relative Write Throughput: 0 00:07:56.299 Relative Write Latency: 0 00:07:56.299 Idle Power: Not Reported 00:07:56.299 Active Power: Not Reported 00:07:56.299 Non-Operational Permissive Mode: Not Supported 00:07:56.299 00:07:56.299 Health Information 00:07:56.299 ================== 00:07:56.299 Critical Warnings: 00:07:56.299 Available Spare Space: OK 00:07:56.299 [2024-11-27 19:06:05.806652] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62883 terminated unexpected 00:07:56.299 Temperature: OK 00:07:56.299 Device Reliability: OK 00:07:56.299 Read Only: No 00:07:56.299 Volatile Memory Backup: OK 00:07:56.299 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.299 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.299 Available Spare: 0% 00:07:56.299 Available Spare Threshold: 0% 00:07:56.299 Life Percentage Used: 0% 00:07:56.299 Data Units Read: 985 00:07:56.299 Data Units Written: 852 00:07:56.299 Host Read Commands: 54980 00:07:56.299 Host Write Commands: 53769 00:07:56.299 Controller Busy Time: 0 minutes 00:07:56.299 Power Cycles: 0 00:07:56.299 Power On Hours: 0 hours 00:07:56.299 Unsafe Shutdowns: 0 00:07:56.299 Unrecoverable Media Errors: 0 00:07:56.299 Lifetime Error Log Entries: 0 00:07:56.299 Warning Temperature Time: 0 minutes 00:07:56.299 Critical Temperature Time: 0 minutes 00:07:56.299 00:07:56.299 Number of Queues 00:07:56.299 ================ 00:07:56.299 Number of I/O Submission Queues: 64 00:07:56.299 Number of I/O Completion Queues: 64 00:07:56.299 00:07:56.299 ZNS Specific Controller Data 00:07:56.299 ============================ 00:07:56.299 Zone Append Size Limit: 0 00:07:56.299 00:07:56.299 00:07:56.299 Active Namespaces 00:07:56.299 ================= 00:07:56.299 Namespace ID: 1 00:07:56.299 Error Recovery Timeout: Unlimited 00:07:56.299 Command Set Identifier: NVM (00h) 00:07:56.299 Deallocate: Supported 00:07:56.299 Deallocated/Unwritten Error: Supported 00:07:56.299 Deallocated Read Value: All 0x00 00:07:56.299 Deallocate in Write Zeroes: Not Supported 00:07:56.299 Deallocated Guard Field: 0xFFFF 00:07:56.299 Flush: Supported 00:07:56.299 Reservation: Not Supported 00:07:56.299 Namespace Sharing Capabilities: Private 00:07:56.299 Size (in LBAs): 1310720 (5GiB) 00:07:56.299 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.299 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.299 Thin Provisioning: Not Supported 00:07:56.299 Per-NS Atomic Units: No 00:07:56.299 Maximum Single Source Range Length: 128 00:07:56.299 Maximum Copy Length: 128 00:07:56.299 Maximum Source Range Count: 128 00:07:56.299 NGUID/EUI64 Never Reused: No 00:07:56.299 Namespace Write Protected: No 00:07:56.299 Number of LBA Formats: 8 00:07:56.299 Current LBA Format: LBA Format #04 00:07:56.299 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.299 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.299 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.299 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:56.299 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.299 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.299 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.299 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.299 00:07:56.299 NVM Specific Namespace Data 00:07:56.299 =========================== 00:07:56.299 Logical Block Storage Tag Mask: 0 00:07:56.299 Protection Information Capabilities: 00:07:56.299 16b Guard Protection Information Storage Tag Support: No 00:07:56.299 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.299 Storage Tag Check Read Support: No 00:07:56.299 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.299 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.300 ===================================================== 00:07:56.300 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.300 ===================================================== 00:07:56.300 Controller Capabilities/Features 00:07:56.300 ================================ 00:07:56.300 Vendor ID: 1b36 00:07:56.300 Subsystem Vendor ID: 1af4 00:07:56.300 Serial Number: 12343 00:07:56.300 Model Number: QEMU NVMe Ctrl 00:07:56.300 Firmware Version: 8.0.0 00:07:56.300 Recommended Arb Burst: 6 00:07:56.300 IEEE OUI Identifier: 00 54 52 00:07:56.300 Multi-path I/O 00:07:56.300 May have multiple subsystem ports: No 00:07:56.300 May have multiple controllers: Yes 00:07:56.300 Associated with SR-IOV VF: No 00:07:56.300 Max Data Transfer Size: 524288 00:07:56.300 Max Number of Namespaces: 256 00:07:56.300 Max Number of I/O Queues: 64 00:07:56.300 NVMe Specification Version (VS): 1.4 00:07:56.300 NVMe Specification Version (Identify): 1.4 00:07:56.300 Maximum Queue Entries: 2048 00:07:56.300 Contiguous Queues Required: Yes 00:07:56.300 Arbitration Mechanisms Supported 00:07:56.300 Weighted Round Robin: Not Supported 00:07:56.300 Vendor Specific: Not Supported 00:07:56.300 Reset Timeout: 7500 ms 00:07:56.300 Doorbell Stride: 4 bytes 00:07:56.300 NVM Subsystem Reset: Not Supported 00:07:56.300 Command Sets Supported 00:07:56.300 NVM Command Set: Supported 00:07:56.300 Boot Partition: Not Supported 00:07:56.300 Memory Page Size Minimum: 4096 bytes 00:07:56.300 Memory Page Size Maximum: 65536 bytes 00:07:56.300 Persistent Memory Region: Not Supported 00:07:56.300 Optional Asynchronous Events Supported 00:07:56.300 Namespace Attribute Notices: Supported 00:07:56.300 Firmware Activation Notices: Not Supported 00:07:56.300 ANA Change Notices: Not Supported 00:07:56.300 PLE Aggregate Log Change Notices: Not Supported 00:07:56.300 LBA Status Info Alert Notices: Not Supported 00:07:56.300 EGE Aggregate Log Change Notices: Not Supported 00:07:56.300 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.300 Zone 
Descriptor Change Notices: Not Supported 00:07:56.300 Discovery Log Change Notices: Not Supported 00:07:56.300 Controller Attributes 00:07:56.300 128-bit Host Identifier: Not Supported 00:07:56.300 Non-Operational Permissive Mode: Not Supported 00:07:56.300 NVM Sets: Not Supported 00:07:56.300 Read Recovery Levels: Not Supported 00:07:56.300 Endurance Groups: Supported 00:07:56.300 Predictable Latency Mode: Not Supported 00:07:56.300 Traffic Based Keep Alive: Not Supported 00:07:56.300 Namespace Granularity: Not Supported 00:07:56.300 SQ Associations: Not Supported 00:07:56.300 UUID List: Not Supported 00:07:56.300 Multi-Domain Subsystem: Not Supported 00:07:56.300 Fixed Capacity Management: Not Supported 00:07:56.300 Variable Capacity Management: Not Supported 00:07:56.300 Delete Endurance Group: Not Supported 00:07:56.300 Delete NVM Set: Not Supported 00:07:56.300 Extended LBA Formats Supported: Supported 00:07:56.300 Flexible Data Placement Supported: Supported 00:07:56.300 00:07:56.300 Controller Memory Buffer Support 00:07:56.300 ================================ 00:07:56.300 Supported: No 00:07:56.300 00:07:56.300 Persistent Memory Region Support 00:07:56.300 ================================ 00:07:56.300 Supported: No 00:07:56.300 00:07:56.300 Admin Command Set Attributes 00:07:56.300 ============================ 00:07:56.300 Security Send/Receive: Not Supported 00:07:56.300 Format NVM: Supported 00:07:56.300 Firmware Activate/Download: Not Supported 00:07:56.300 Namespace Management: Supported 00:07:56.300 Device Self-Test: Not Supported 00:07:56.300 Directives: Supported 00:07:56.300 NVMe-MI: Not Supported 00:07:56.300 Virtualization Management: Not Supported 00:07:56.300 Doorbell Buffer Config: Supported 00:07:56.300 Get LBA Status Capability: Not Supported 00:07:56.300 Command & Feature Lockdown Capability: Not Supported 00:07:56.300 Abort Command Limit: 4 00:07:56.300 Async Event Request Limit: 4 00:07:56.300 Number of Firmware Slots: N/A 00:07:56.300 Firmware Slot 1 Read-Only: N/A 00:07:56.300 Firmware Activation Without Reset: N/A 00:07:56.300 Multiple Update Detection Support: N/A 00:07:56.300 Firmware Update Granularity: No Information Provided 00:07:56.300 Per-Namespace SMART Log: Yes 00:07:56.300 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.300 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.300 Command Effects Log Page: Supported 00:07:56.300 Get Log Page Extended Data: Supported 00:07:56.300 Telemetry Log Pages: Not Supported 00:07:56.300 Persistent Event Log Pages: Not Supported 00:07:56.300 Supported Log Pages Log Page: May Support 00:07:56.300 Commands Supported & Effects Log Page: Not Supported 00:07:56.300 Feature Identifiers & Effects Log Page: May Support 00:07:56.300 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.300 Data Area 4 for Telemetry Log: Not Supported 00:07:56.300 Error Log Page Entries Supported: 1 00:07:56.300 Keep Alive: Not Supported 00:07:56.300 00:07:56.300 NVM Command Set Attributes 00:07:56.300 ========================== 00:07:56.300 Submission Queue Entry Size 00:07:56.300 Max: 64 00:07:56.300 Min: 64 00:07:56.300 Completion Queue Entry Size 00:07:56.300 Max: 16 00:07:56.300 Min: 16 00:07:56.300 Number of Namespaces: 256 00:07:56.300 Compare Command: Supported 00:07:56.300 Write Uncorrectable Command: Not Supported 00:07:56.300 Dataset Management Command: Supported 00:07:56.300 Write Zeroes Command: Supported 00:07:56.300 Set Features Save Field: Supported 00:07:56.300 Reservations: Not Supported 00:07:56.300
Timestamp: Supported 00:07:56.300 Copy: Supported 00:07:56.300 Volatile Write Cache: Present 00:07:56.300 Atomic Write Unit (Normal): 1 00:07:56.300 Atomic Write Unit (PFail): 1 00:07:56.300 Atomic Compare & Write Unit: 1 00:07:56.300 Fused Compare & Write: Not Supported 00:07:56.300 Scatter-Gather List 00:07:56.300 SGL Command Set: Supported 00:07:56.300 SGL Keyed: Not Supported 00:07:56.300 SGL Bit Bucket Descriptor: Not Supported 00:07:56.300 SGL Metadata Pointer: Not Supported 00:07:56.300 Oversized SGL: Not Supported 00:07:56.300 SGL Metadata Address: Not Supported 00:07:56.300 SGL Offset: Not Supported 00:07:56.300 Transport SGL Data Block: Not Supported 00:07:56.300 Replay Protected Memory Block: Not Supported 00:07:56.300 00:07:56.300 Firmware Slot Information 00:07:56.300 ========================= 00:07:56.300 Active slot: 1 00:07:56.300 Slot 1 Firmware Revision: 1.0 00:07:56.300 00:07:56.300 00:07:56.300 Commands Supported and Effects 00:07:56.300 ============================== 00:07:56.300 Admin Commands 00:07:56.300 -------------- 00:07:56.300 Delete I/O Submission Queue (00h): Supported 00:07:56.300 Create I/O Submission Queue (01h): Supported 00:07:56.300 Get Log Page (02h): Supported 00:07:56.300 Delete I/O Completion Queue (04h): Supported 00:07:56.300 Create I/O Completion Queue (05h): Supported 00:07:56.300 Identify (06h): Supported 00:07:56.300 Abort (08h): Supported 00:07:56.300 Set Features (09h): Supported 00:07:56.300 Get Features (0Ah): Supported 00:07:56.300 Asynchronous Event Request (0Ch): Supported 00:07:56.300 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.300 Directive Send (19h): Supported 00:07:56.300 Directive Receive (1Ah): Supported 00:07:56.300 Virtualization Management (1Ch): Supported 00:07:56.300 Doorbell Buffer Config (7Ch): Supported 00:07:56.300 Format NVM (80h): Supported LBA-Change 00:07:56.300 I/O Commands 00:07:56.300 ------------ 00:07:56.300 Flush (00h): Supported LBA-Change 00:07:56.300 Write (01h): Supported LBA-Change 00:07:56.300 Read (02h): Supported 00:07:56.300 Compare (05h): Supported 00:07:56.300 Write Zeroes (08h): Supported LBA-Change 00:07:56.300 Dataset Management (09h): Supported LBA-Change 00:07:56.300 Unknown (0Ch): Supported 00:07:56.300 Unknown (12h): Supported 00:07:56.300 Copy (19h): Supported LBA-Change 00:07:56.300 Unknown (1Dh): Supported LBA-Change 00:07:56.300 00:07:56.300 Error Log 00:07:56.300 ========= 00:07:56.300 00:07:56.300 Arbitration 00:07:56.300 =========== 00:07:56.300 Arbitration Burst: no limit 00:07:56.300 00:07:56.300 Power Management 00:07:56.300 ================ 00:07:56.300 Number of Power States: 1 00:07:56.300 Current Power State: Power State #0 00:07:56.300 Power State #0: 00:07:56.301 Max Power: 25.00 W 00:07:56.301 Non-Operational State: Operational 00:07:56.301 Entry Latency: 16 microseconds 00:07:56.301 Exit Latency: 4 microseconds 00:07:56.301 Relative Read Throughput: 0 00:07:56.301 Relative Read Latency: 0 00:07:56.301 Relative Write Throughput: 0 00:07:56.301 Relative Write Latency: 0 00:07:56.301 Idle Power: Not Reported 00:07:56.301 Active Power: Not Reported 00:07:56.301 Non-Operational Permissive Mode: Not Supported 00:07:56.301 00:07:56.301 Health Information 00:07:56.301 ================== 00:07:56.301 Critical Warnings: 00:07:56.301 Available Spare Space: OK 00:07:56.301 Temperature: OK 00:07:56.301 Device Reliability: OK 00:07:56.301 Read Only: No 00:07:56.301 Volatile Memory Backup: OK 00:07:56.301 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.301 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.301 Available Spare: 0% 00:07:56.301 Available Spare Threshold: 0% 00:07:56.301 Life Percentage Used: 0% 00:07:56.301 Data Units Read: 1031 00:07:56.301 Data Units Written: 960 00:07:56.301 Host Read Commands: 39331 00:07:56.301 Host Write Commands: 38755 00:07:56.301 Controller Busy Time: 0 minutes 00:07:56.301 Power Cycles: 0 00:07:56.301 Power On Hours: 0 hours 00:07:56.301 Unsafe Shutdowns: 0 00:07:56.301 Unrecoverable Media Errors: 0 00:07:56.301 Lifetime Error Log Entries: 0 00:07:56.301 Warning Temperature Time: 0 minutes 00:07:56.301 Critical Temperature Time: 0 minutes 00:07:56.301 00:07:56.301 Number of Queues 00:07:56.301 ================ 00:07:56.301 Number of I/O Submission Queues: 64 00:07:56.301 Number of I/O Completion Queues: 64 00:07:56.301 00:07:56.301 ZNS Specific Controller Data 00:07:56.301 ============================ 00:07:56.301 Zone Append Size Limit: 0 00:07:56.301 00:07:56.301 00:07:56.301 Active Namespaces 00:07:56.301 ================= 00:07:56.301 Namespace ID: 1 00:07:56.301 Error Recovery Timeout: Unlimited 00:07:56.301 Command Set Identifier: NVM (00h) 00:07:56.301 Deallocate: Supported 00:07:56.301 Deallocated/Unwritten Error: Supported 00:07:56.301 Deallocated Read Value: All 0x00 00:07:56.301 Deallocate in Write Zeroes: Not Supported 00:07:56.301 Deallocated Guard Field: 0xFFFF 00:07:56.301 Flush: Supported 00:07:56.301 Reservation: Not Supported 00:07:56.301 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.301 Size (in LBAs): 262144 (1GiB) 00:07:56.301 Capacity (in LBAs): 262144 (1GiB) 00:07:56.301 Utilization (in LBAs): 262144 (1GiB) 00:07:56.301 Thin Provisioning: Not Supported 00:07:56.301 Per-NS Atomic Units: No 00:07:56.301 Maximum Single Source Range Length: 128 00:07:56.301 Maximum Copy Length: 128 00:07:56.301 Maximum Source Range Count: 128 00:07:56.301 NGUID/EUI64 Never Reused: No 00:07:56.301 Namespace Write Protected: No 00:07:56.301 Endurance group ID: 1 00:07:56.301 Number of LBA Formats: 8 00:07:56.301 Current LBA Format: LBA Format #04 00:07:56.301 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.301 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.301 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.301 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.301 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.301 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.301 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.301 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.301 00:07:56.301 Get Feature FDP: 00:07:56.301 ================ 00:07:56.301 Enabled: Yes 00:07:56.301 FDP configuration index: 0 00:07:56.301 00:07:56.301 FDP configurations log page 00:07:56.301 =========================== 00:07:56.301 Number of FDP configurations: 1 00:07:56.301 Version: 0 00:07:56.301 Size: 112 00:07:56.301 FDP Configuration Descriptor: 0 00:07:56.301 Descriptor Size: 96 00:07:56.301 Reclaim Group Identifier format: 2 00:07:56.301 FDP Volatile Write Cache: Not Present 00:07:56.301 FDP Configuration: Valid 00:07:56.301 Vendor Specific Size: 0 00:07:56.301 Number of Reclaim Groups: 2 00:07:56.301 Number of Reclaim Unit Handles: 8 00:07:56.301 Max Placement Identifiers: 128 00:07:56.301 Number of Namespaces Supported: 256 00:07:56.301 Reclaim Unit Nominal Size: 6000000 bytes 00:07:56.301 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.301 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #001: RUH
Type: Initially Isolated 00:07:56.301 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #003: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.301 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.301 00:07:56.301 FDP reclaim unit handle usage log page 00:07:56.301 ====================================== 00:07:56.301 Number of Reclaim Unit Handles: 8 00:07:56.301 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.301 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.301 RUH Usage Desc #007: RUH Attributes: Unused 00:07:56.301 00:07:56.301 FDP statistics log page 00:07:56.301 ======================= 00:07:56.301 Host bytes with metadata written: 582459392 00:07:56.301 [2024-11-27 19:06:05.808294] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62883 terminated unexpected 00:07:56.301 Media bytes with metadata written: 582537216 00:07:56.301 Media bytes erased: 0 00:07:56.301 00:07:56.301 FDP events log page 00:07:56.301 =================== 00:07:56.301 Number of FDP events: 0 00:07:56.301 00:07:56.301 NVM Specific Namespace Data 00:07:56.301 =========================== 00:07:56.301 Logical Block Storage Tag Mask: 0 00:07:56.301 Protection Information Capabilities: 00:07:56.301 16b Guard Protection Information Storage Tag Support: No 00:07:56.301 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.301 Storage Tag Check Read Support: No 00:07:56.301 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.301 ===================================================== 00:07:56.301 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.301 ===================================================== 00:07:56.301 Controller Capabilities/Features 00:07:56.301 ================================ 00:07:56.301 Vendor ID: 1b36 00:07:56.301 Subsystem Vendor ID: 1af4 00:07:56.301 Serial Number: 12342 00:07:56.301 Model Number: QEMU NVMe Ctrl 00:07:56.301 Firmware Version: 8.0.0 00:07:56.301 Recommended Arb Burst: 6 00:07:56.301 IEEE OUI Identifier: 00 54 52 00:07:56.301 Multi-path I/O 00:07:56.301 May have multiple subsystem ports: No 00:07:56.301 May have multiple controllers: No 00:07:56.301 Associated with SR-IOV VF: No 00:07:56.301 Max Data Transfer Size: 524288 00:07:56.301 Max Number of Namespaces: 256
00:07:56.301 Max Number of I/O Queues: 64 00:07:56.301 NVMe Specification Version (VS): 1.4 00:07:56.301 NVMe Specification Version (Identify): 1.4 00:07:56.301 Maximum Queue Entries: 2048 00:07:56.301 Contiguous Queues Required: Yes 00:07:56.301 Arbitration Mechanisms Supported 00:07:56.301 Weighted Round Robin: Not Supported 00:07:56.301 Vendor Specific: Not Supported 00:07:56.301 Reset Timeout: 7500 ms 00:07:56.301 Doorbell Stride: 4 bytes 00:07:56.302 NVM Subsystem Reset: Not Supported 00:07:56.302 Command Sets Supported 00:07:56.302 NVM Command Set: Supported 00:07:56.302 Boot Partition: Not Supported 00:07:56.302 Memory Page Size Minimum: 4096 bytes 00:07:56.302 Memory Page Size Maximum: 65536 bytes 00:07:56.302 Persistent Memory Region: Not Supported 00:07:56.302 Optional Asynchronous Events Supported 00:07:56.302 Namespace Attribute Notices: Supported 00:07:56.302 Firmware Activation Notices: Not Supported 00:07:56.302 ANA Change Notices: Not Supported 00:07:56.302 PLE Aggregate Log Change Notices: Not Supported 00:07:56.302 LBA Status Info Alert Notices: Not Supported 00:07:56.302 EGE Aggregate Log Change Notices: Not Supported 00:07:56.302 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.302 Zone Descriptor Change Notices: Not Supported 00:07:56.302 Discovery Log Change Notices: Not Supported 00:07:56.302 Controller Attributes 00:07:56.302 128-bit Host Identifier: Not Supported 00:07:56.302 Non-Operational Permissive Mode: Not Supported 00:07:56.302 NVM Sets: Not Supported 00:07:56.302 Read Recovery Levels: Not Supported 00:07:56.302 Endurance Groups: Not Supported 00:07:56.302 Predictable Latency Mode: Not Supported 00:07:56.302 Traffic Based Keep Alive: Not Supported 00:07:56.302 Namespace Granularity: Not Supported 00:07:56.302 SQ Associations: Not Supported 00:07:56.302 UUID List: Not Supported 00:07:56.302 Multi-Domain Subsystem: Not Supported 00:07:56.302 Fixed Capacity Management: Not Supported 00:07:56.302 Variable Capacity Management: Not Supported 00:07:56.302 Delete Endurance Group: Not Supported 00:07:56.302 Delete NVM Set: Not Supported 00:07:56.302 Extended LBA Formats Supported: Supported 00:07:56.302 Flexible Data Placement Supported: Not Supported 00:07:56.302 00:07:56.302 Controller Memory Buffer Support 00:07:56.302 ================================ 00:07:56.302 Supported: No 00:07:56.302 00:07:56.302 Persistent Memory Region Support 00:07:56.302 ================================ 00:07:56.302 Supported: No 00:07:56.302 00:07:56.302 Admin Command Set Attributes 00:07:56.302 ============================ 00:07:56.302 Security Send/Receive: Not Supported 00:07:56.302 Format NVM: Supported 00:07:56.302 Firmware Activate/Download: Not Supported 00:07:56.302 Namespace Management: Supported 00:07:56.302 Device Self-Test: Not Supported 00:07:56.302 Directives: Supported 00:07:56.302 NVMe-MI: Not Supported 00:07:56.302 Virtualization Management: Not Supported 00:07:56.302 Doorbell Buffer Config: Supported 00:07:56.302 Get LBA Status Capability: Not Supported 00:07:56.302 Command & Feature Lockdown Capability: Not Supported 00:07:56.302 Abort Command Limit: 4 00:07:56.302 Async Event Request Limit: 4 00:07:56.302 Number of Firmware Slots: N/A 00:07:56.302 Firmware Slot 1 Read-Only: N/A 00:07:56.302 Firmware Activation Without Reset: N/A 00:07:56.302 Multiple Update Detection Support: N/A 00:07:56.302 Firmware Update Granularity: No Information Provided 00:07:56.302 Per-Namespace SMART Log: Yes 00:07:56.302 Asymmetric Namespace Access Log Page: Not Supported
00:07:56.302 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.302 Command Effects Log Page: Supported 00:07:56.302 Get Log Page Extended Data: Supported 00:07:56.302 Telemetry Log Pages: Not Supported 00:07:56.302 Persistent Event Log Pages: Not Supported 00:07:56.302 Supported Log Pages Log Page: May Support 00:07:56.302 Commands Supported & Effects Log Page: Not Supported 00:07:56.302 Feature Identifiers & Effects Log Page: May Support 00:07:56.302 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.302 Data Area 4 for Telemetry Log: Not Supported 00:07:56.302 Error Log Page Entries Supported: 1 00:07:56.302 Keep Alive: Not Supported 00:07:56.302 00:07:56.302 NVM Command Set Attributes 00:07:56.302 ========================== 00:07:56.302 Submission Queue Entry Size 00:07:56.302 Max: 64 00:07:56.302 Min: 64 00:07:56.302 Completion Queue Entry Size 00:07:56.302 Max: 16 00:07:56.302 Min: 16 00:07:56.302 Number of Namespaces: 256 00:07:56.302 Compare Command: Supported 00:07:56.302 Write Uncorrectable Command: Not Supported 00:07:56.302 Dataset Management Command: Supported 00:07:56.302 Write Zeroes Command: Supported 00:07:56.302 Set Features Save Field: Supported 00:07:56.302 Reservations: Not Supported 00:07:56.302 Timestamp: Supported 00:07:56.302 Copy: Supported 00:07:56.302 Volatile Write Cache: Present 00:07:56.302 Atomic Write Unit (Normal): 1 00:07:56.302 Atomic Write Unit (PFail): 1 00:07:56.302 Atomic Compare & Write Unit: 1 00:07:56.302 Fused Compare & Write: Not Supported 00:07:56.302 Scatter-Gather List 00:07:56.302 SGL Command Set: Supported 00:07:56.302 SGL Keyed: Not Supported 00:07:56.302 SGL Bit Bucket Descriptor: Not Supported 00:07:56.302 SGL Metadata Pointer: Not Supported 00:07:56.302 Oversized SGL: Not Supported 00:07:56.302 SGL Metadata Address: Not Supported 00:07:56.302 SGL Offset: Not Supported 00:07:56.302 Transport SGL Data Block: Not Supported 00:07:56.302 Replay Protected Memory Block: Not Supported 00:07:56.302 00:07:56.302 Firmware Slot Information 00:07:56.302 ========================= 00:07:56.302 Active slot: 1 00:07:56.302 Slot 1 Firmware Revision: 1.0 00:07:56.302 00:07:56.302 00:07:56.302 Commands Supported and Effects 00:07:56.302 ============================== 00:07:56.302 Admin Commands 00:07:56.302 -------------- 00:07:56.302 Delete I/O Submission Queue (00h): Supported 00:07:56.302 Create I/O Submission Queue (01h): Supported 00:07:56.302 Get Log Page (02h): Supported 00:07:56.302 Delete I/O Completion Queue (04h): Supported 00:07:56.302 Create I/O Completion Queue (05h): Supported 00:07:56.302 Identify (06h): Supported 00:07:56.302 Abort (08h): Supported 00:07:56.302 Set Features (09h): Supported 00:07:56.302 Get Features (0Ah): Supported 00:07:56.302 Asynchronous Event Request (0Ch): Supported 00:07:56.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.302 Directive Send (19h): Supported 00:07:56.302 Directive Receive (1Ah): Supported 00:07:56.302 Virtualization Management (1Ch): Supported 00:07:56.302 Doorbell Buffer Config (7Ch): Supported 00:07:56.302 Format NVM (80h): Supported LBA-Change 00:07:56.302 I/O Commands 00:07:56.302 ------------ 00:07:56.302 Flush (00h): Supported LBA-Change 00:07:56.302 Write (01h): Supported LBA-Change 00:07:56.302 Read (02h): Supported 00:07:56.302 Compare (05h): Supported 00:07:56.302 Write Zeroes (08h): Supported LBA-Change 00:07:56.302 Dataset Management (09h): Supported LBA-Change 00:07:56.302 Unknown (0Ch): Supported 00:07:56.302 Unknown (12h): Supported 00:07:56.302 Copy (19h):
Supported LBA-Change 00:07:56.302 Unknown (1Dh): Supported LBA-Change 00:07:56.302 00:07:56.302 Error Log 00:07:56.303 ========= 00:07:56.303 00:07:56.303 Arbitration 00:07:56.303 =========== 00:07:56.303 Arbitration Burst: no limit 00:07:56.303 00:07:56.303 Power Management 00:07:56.303 ================ 00:07:56.303 Number of Power States: 1 00:07:56.303 Current Power State: Power State #0 00:07:56.303 Power State #0: 00:07:56.303 Max Power: 25.00 W 00:07:56.303 Non-Operational State: Operational 00:07:56.303 Entry Latency: 16 microseconds 00:07:56.303 Exit Latency: 4 microseconds 00:07:56.303 Relative Read Throughput: 0 00:07:56.303 Relative Read Latency: 0 00:07:56.303 Relative Write Throughput: 0 00:07:56.303 Relative Write Latency: 0 00:07:56.303 Idle Power: Not Reported 00:07:56.303 Active Power: Not Reported 00:07:56.303 Non-Operational Permissive Mode: Not Supported 00:07:56.303 00:07:56.303 Health Information 00:07:56.303 ================== 00:07:56.303 Critical Warnings: 00:07:56.303 Available Spare Space: OK 00:07:56.303 Temperature: OK 00:07:56.303 Device Reliability: OK 00:07:56.303 Read Only: No 00:07:56.303 Volatile Memory Backup: OK 00:07:56.303 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.303 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.303 Available Spare: 0% 00:07:56.303 Available Spare Threshold: 0% 00:07:56.303 Life Percentage Used: 0% 00:07:56.303 Data Units Read: 2239 00:07:56.303 Data Units Written: 2026 00:07:56.303 Host Read Commands: 111032 00:07:56.303 Host Write Commands: 109301 00:07:56.303 Controller Busy Time: 0 minutes 00:07:56.303 Power Cycles: 0 00:07:56.303 Power On Hours: 0 hours 00:07:56.303 Unsafe Shutdowns: 0 00:07:56.303 Unrecoverable Media Errors: 0 00:07:56.303 Lifetime Error Log Entries: 0 00:07:56.303 Warning Temperature Time: 0 minutes 00:07:56.303 Critical Temperature Time: 0 minutes 00:07:56.303 00:07:56.303 Number of Queues 00:07:56.303 ================ 00:07:56.303 Number of I/O Submission Queues: 64 00:07:56.303 Number of I/O Completion Queues: 64 00:07:56.303 00:07:56.303 ZNS Specific Controller Data 00:07:56.303 ============================ 00:07:56.303 Zone Append Size Limit: 0 00:07:56.303 00:07:56.303 00:07:56.303 Active Namespaces 00:07:56.303 ================= 00:07:56.303 Namespace ID:1 00:07:56.303 Error Recovery Timeout: Unlimited 00:07:56.303 Command Set Identifier: NVM (00h) 00:07:56.303 Deallocate: Supported 00:07:56.303 Deallocated/Unwritten Error: Supported 00:07:56.303 Deallocated Read Value: All 0x00 00:07:56.303 Deallocate in Write Zeroes: Not Supported 00:07:56.303 Deallocated Guard Field: 0xFFFF 00:07:56.303 Flush: Supported 00:07:56.303 Reservation: Not Supported 00:07:56.303 Namespace Sharing Capabilities: Private 00:07:56.303 Size (in LBAs): 1048576 (4GiB) 00:07:56.303 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.303 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.303 Thin Provisioning: Not Supported 00:07:56.303 Per-NS Atomic Units: No 00:07:56.303 Maximum Single Source Range Length: 128 00:07:56.303 Maximum Copy Length: 128 00:07:56.303 Maximum Source Range Count: 128 00:07:56.303 NGUID/EUI64 Never Reused: No 00:07:56.303 Namespace Write Protected: No 00:07:56.303 Number of LBA Formats: 8 00:07:56.303 Current LBA Format: LBA Format #04 00:07:56.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.303 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.303 00:07:56.303 NVM Specific Namespace Data 00:07:56.303 =========================== 00:07:56.303 Logical Block Storage Tag Mask: 0 00:07:56.303 Protection Information Capabilities: 00:07:56.303 16b Guard Protection Information Storage Tag Support: No 00:07:56.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.303 Storage Tag Check Read Support: No 00:07:56.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Namespace ID:2 00:07:56.303 Error Recovery Timeout: Unlimited 00:07:56.303 Command Set Identifier: NVM (00h) 00:07:56.303 Deallocate: Supported 00:07:56.303 Deallocated/Unwritten Error: Supported 00:07:56.303 Deallocated Read Value: All 0x00 00:07:56.303 Deallocate in Write Zeroes: Not Supported 00:07:56.303 Deallocated Guard Field: 0xFFFF 00:07:56.303 Flush: Supported 00:07:56.303 Reservation: Not Supported 00:07:56.303 Namespace Sharing Capabilities: Private 00:07:56.303 Size (in LBAs): 1048576 (4GiB) 00:07:56.303 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.303 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.303 Thin Provisioning: Not Supported 00:07:56.303 Per-NS Atomic Units: No 00:07:56.303 Maximum Single Source Range Length: 128 00:07:56.303 Maximum Copy Length: 128 00:07:56.303 Maximum Source Range Count: 128 00:07:56.303 NGUID/EUI64 Never Reused: No 00:07:56.303 Namespace Write Protected: No 00:07:56.303 Number of LBA Formats: 8 00:07:56.303 Current LBA Format: LBA Format #04 00:07:56.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.303 00:07:56.303 NVM Specific Namespace Data 00:07:56.303 =========================== 00:07:56.303 Logical Block Storage Tag Mask: 0 00:07:56.303 Protection Information Capabilities: 00:07:56.303 16b Guard Protection Information Storage Tag Support: No 00:07:56.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.303 Storage Tag Check Read Support: No 00:07:56.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.303 Namespace ID:3 00:07:56.303 Error Recovery Timeout: Unlimited 00:07:56.303 Command Set Identifier: NVM (00h) 00:07:56.303 Deallocate: Supported 00:07:56.303 Deallocated/Unwritten Error: Supported 00:07:56.303 Deallocated Read Value: All 0x00 00:07:56.303 Deallocate in Write Zeroes: Not Supported 00:07:56.303 Deallocated Guard Field: 0xFFFF 00:07:56.303 Flush: Supported 00:07:56.303 Reservation: Not Supported 00:07:56.303 Namespace Sharing Capabilities: Private 00:07:56.303 Size (in LBAs): 1048576 (4GiB) 00:07:56.303 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.303 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.303 Thin Provisioning: Not Supported 00:07:56.303 Per-NS Atomic Units: No 00:07:56.303 Maximum Single Source Range Length: 128 00:07:56.303 Maximum Copy Length: 128 00:07:56.303 Maximum Source Range Count: 128 00:07:56.303 NGUID/EUI64 Never Reused: No 00:07:56.303 Namespace Write Protected: No 00:07:56.303 Number of LBA Formats: 8 00:07:56.303 Current LBA Format: LBA Format #04 00:07:56.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.303 00:07:56.303 NVM Specific Namespace Data 00:07:56.303 =========================== 00:07:56.303 Logical Block Storage Tag Mask: 0 00:07:56.304 Protection Information Capabilities: 00:07:56.304 16b Guard Protection Information Storage Tag Support: No 00:07:56.304 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.304 Storage Tag Check Read Support: No 00:07:56.304 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.304 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.304 19:06:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:56.563 ===================================================== 00:07:56.563 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.563 ===================================================== 00:07:56.563 Controller Capabilities/Features 00:07:56.563 ================================ 00:07:56.563 Vendor ID: 1b36 00:07:56.563 Subsystem Vendor ID: 1af4 00:07:56.563 Serial Number: 12340 00:07:56.563 Model Number: QEMU NVMe Ctrl 00:07:56.563 Firmware Version: 8.0.0 00:07:56.563 Recommended Arb Burst: 6 00:07:56.563 IEEE OUI Identifier: 00 54 52 00:07:56.563 Multi-path I/O 00:07:56.563 May have multiple subsystem ports: No 00:07:56.563 May have multiple controllers: No 00:07:56.563 Associated with SR-IOV VF: No 00:07:56.563 Max Data Transfer Size: 524288 00:07:56.563 Max Number of Namespaces: 256 00:07:56.563 Max Number of I/O Queues: 64 00:07:56.563 NVMe Specification Version (VS): 1.4 00:07:56.563 NVMe Specification Version (Identify): 1.4 00:07:56.563 Maximum Queue Entries: 2048 00:07:56.563 Contiguous Queues Required: Yes 00:07:56.563 Arbitration Mechanisms Supported 00:07:56.563 Weighted Round Robin: Not Supported 00:07:56.563 Vendor Specific: Not Supported 00:07:56.563 Reset Timeout: 7500 ms 00:07:56.563 Doorbell Stride: 4 bytes 00:07:56.563 NVM Subsystem Reset: Not Supported 00:07:56.563 Command Sets Supported 00:07:56.563 NVM Command Set: Supported 00:07:56.563 Boot Partition: Not Supported 00:07:56.563 Memory Page Size Minimum: 4096 bytes 00:07:56.563 Memory Page Size Maximum: 65536 bytes 00:07:56.563 Persistent Memory Region: Not Supported 00:07:56.563 Optional Asynchronous Events Supported 00:07:56.563 Namespace Attribute Notices: Supported 00:07:56.563 Firmware Activation Notices: Not Supported 00:07:56.563 ANA Change Notices: Not Supported 00:07:56.563 PLE Aggregate Log Change Notices: Not Supported 00:07:56.563 LBA Status Info Alert Notices: Not Supported 00:07:56.563 EGE Aggregate Log Change Notices: Not Supported 00:07:56.563 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.563 Zone Descriptor Change Notices: Not Supported 00:07:56.563 Discovery Log Change Notices: Not Supported 00:07:56.563 Controller Attributes 00:07:56.563 128-bit Host Identifier: Not Supported 00:07:56.563 Non-Operational Permissive Mode: Not Supported 00:07:56.563 NVM Sets: Not Supported 00:07:56.563 Read Recovery Levels: Not Supported 00:07:56.563 Endurance Groups: Not Supported 00:07:56.563 Predictable Latency Mode: Not Supported 00:07:56.563 Traffic Based Keep ALive: Not Supported 00:07:56.563 Namespace Granularity: Not Supported 00:07:56.563 SQ Associations: Not Supported 00:07:56.563 UUID List: Not Supported 00:07:56.563 Multi-Domain Subsystem: Not Supported 00:07:56.563 Fixed Capacity Management: Not Supported 00:07:56.563 Variable Capacity Management: Not Supported 00:07:56.563 Delete Endurance Group: Not Supported 00:07:56.563 Delete NVM Set: Not Supported 00:07:56.563 Extended LBA Formats Supported: Supported 00:07:56.563 Flexible Data Placement Supported: Not Supported 00:07:56.563 00:07:56.563 Controller Memory Buffer Support 00:07:56.563 ================================ 00:07:56.563 Supported: No 00:07:56.563 00:07:56.563 Persistent Memory Region Support 00:07:56.563 ================================ 00:07:56.563 Supported: No 00:07:56.563 00:07:56.563 Admin Command Set Attributes 00:07:56.563 ============================ 00:07:56.563 Security Send/Receive: Not Supported 00:07:56.563 
Format NVM: Supported 00:07:56.563 Firmware Activate/Download: Not Supported 00:07:56.563 Namespace Management: Supported 00:07:56.563 Device Self-Test: Not Supported 00:07:56.563 Directives: Supported 00:07:56.563 NVMe-MI: Not Supported 00:07:56.563 Virtualization Management: Not Supported 00:07:56.563 Doorbell Buffer Config: Supported 00:07:56.563 Get LBA Status Capability: Not Supported 00:07:56.563 Command & Feature Lockdown Capability: Not Supported 00:07:56.563 Abort Command Limit: 4 00:07:56.563 Async Event Request Limit: 4 00:07:56.563 Number of Firmware Slots: N/A 00:07:56.563 Firmware Slot 1 Read-Only: N/A 00:07:56.563 Firmware Activation Without Reset: N/A 00:07:56.563 Multiple Update Detection Support: N/A 00:07:56.563 Firmware Update Granularity: No Information Provided 00:07:56.563 Per-Namespace SMART Log: Yes 00:07:56.563 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.563 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.563 Command Effects Log Page: Supported 00:07:56.563 Get Log Page Extended Data: Supported 00:07:56.563 Telemetry Log Pages: Not Supported 00:07:56.563 Persistent Event Log Pages: Not Supported 00:07:56.563 Supported Log Pages Log Page: May Support 00:07:56.563 Commands Supported & Effects Log Page: Not Supported 00:07:56.563 Feature Identifiers & Effects Log Page:May Support 00:07:56.563 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.563 Data Area 4 for Telemetry Log: Not Supported 00:07:56.563 Error Log Page Entries Supported: 1 00:07:56.563 Keep Alive: Not Supported 00:07:56.563 00:07:56.563 NVM Command Set Attributes 00:07:56.563 ========================== 00:07:56.563 Submission Queue Entry Size 00:07:56.563 Max: 64 00:07:56.563 Min: 64 00:07:56.563 Completion Queue Entry Size 00:07:56.563 Max: 16 00:07:56.563 Min: 16 00:07:56.563 Number of Namespaces: 256 00:07:56.563 Compare Command: Supported 00:07:56.563 Write Uncorrectable Command: Not Supported 00:07:56.563 Dataset Management Command: Supported 00:07:56.563 Write Zeroes Command: Supported 00:07:56.563 Set Features Save Field: Supported 00:07:56.563 Reservations: Not Supported 00:07:56.563 Timestamp: Supported 00:07:56.563 Copy: Supported 00:07:56.563 Volatile Write Cache: Present 00:07:56.563 Atomic Write Unit (Normal): 1 00:07:56.563 Atomic Write Unit (PFail): 1 00:07:56.563 Atomic Compare & Write Unit: 1 00:07:56.563 Fused Compare & Write: Not Supported 00:07:56.563 Scatter-Gather List 00:07:56.563 SGL Command Set: Supported 00:07:56.563 SGL Keyed: Not Supported 00:07:56.563 SGL Bit Bucket Descriptor: Not Supported 00:07:56.563 SGL Metadata Pointer: Not Supported 00:07:56.563 Oversized SGL: Not Supported 00:07:56.563 SGL Metadata Address: Not Supported 00:07:56.563 SGL Offset: Not Supported 00:07:56.563 Transport SGL Data Block: Not Supported 00:07:56.563 Replay Protected Memory Block: Not Supported 00:07:56.563 00:07:56.563 Firmware Slot Information 00:07:56.563 ========================= 00:07:56.563 Active slot: 1 00:07:56.563 Slot 1 Firmware Revision: 1.0 00:07:56.563 00:07:56.563 00:07:56.563 Commands Supported and Effects 00:07:56.563 ============================== 00:07:56.563 Admin Commands 00:07:56.563 -------------- 00:07:56.563 Delete I/O Submission Queue (00h): Supported 00:07:56.563 Create I/O Submission Queue (01h): Supported 00:07:56.563 Get Log Page (02h): Supported 00:07:56.563 Delete I/O Completion Queue (04h): Supported 00:07:56.563 Create I/O Completion Queue (05h): Supported 00:07:56.563 Identify (06h): Supported 00:07:56.563 Abort (08h): Supported 
00:07:56.563 Set Features (09h): Supported 00:07:56.563 Get Features (0Ah): Supported 00:07:56.563 Asynchronous Event Request (0Ch): Supported 00:07:56.563 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.563 Directive Send (19h): Supported 00:07:56.563 Directive Receive (1Ah): Supported 00:07:56.563 Virtualization Management (1Ch): Supported 00:07:56.563 Doorbell Buffer Config (7Ch): Supported 00:07:56.563 Format NVM (80h): Supported LBA-Change 00:07:56.563 I/O Commands 00:07:56.563 ------------ 00:07:56.563 Flush (00h): Supported LBA-Change 00:07:56.563 Write (01h): Supported LBA-Change 00:07:56.563 Read (02h): Supported 00:07:56.563 Compare (05h): Supported 00:07:56.563 Write Zeroes (08h): Supported LBA-Change 00:07:56.563 Dataset Management (09h): Supported LBA-Change 00:07:56.563 Unknown (0Ch): Supported 00:07:56.563 Unknown (12h): Supported 00:07:56.563 Copy (19h): Supported LBA-Change 00:07:56.563 Unknown (1Dh): Supported LBA-Change 00:07:56.563 00:07:56.563 Error Log 00:07:56.563 ========= 00:07:56.563 00:07:56.563 Arbitration 00:07:56.563 =========== 00:07:56.563 Arbitration Burst: no limit 00:07:56.563 00:07:56.563 Power Management 00:07:56.563 ================ 00:07:56.563 Number of Power States: 1 00:07:56.563 Current Power State: Power State #0 00:07:56.563 Power State #0: 00:07:56.563 Max Power: 25.00 W 00:07:56.564 Non-Operational State: Operational 00:07:56.564 Entry Latency: 16 microseconds 00:07:56.564 Exit Latency: 4 microseconds 00:07:56.564 Relative Read Throughput: 0 00:07:56.564 Relative Read Latency: 0 00:07:56.564 Relative Write Throughput: 0 00:07:56.564 Relative Write Latency: 0 00:07:56.564 Idle Power: Not Reported 00:07:56.564 Active Power: Not Reported 00:07:56.564 Non-Operational Permissive Mode: Not Supported 00:07:56.564 00:07:56.564 Health Information 00:07:56.564 ================== 00:07:56.564 Critical Warnings: 00:07:56.564 Available Spare Space: OK 00:07:56.564 Temperature: OK 00:07:56.564 Device Reliability: OK 00:07:56.564 Read Only: No 00:07:56.564 Volatile Memory Backup: OK 00:07:56.564 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.564 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.564 Available Spare: 0% 00:07:56.564 Available Spare Threshold: 0% 00:07:56.564 Life Percentage Used: 0% 00:07:56.564 Data Units Read: 641 00:07:56.564 Data Units Written: 569 00:07:56.564 Host Read Commands: 35926 00:07:56.564 Host Write Commands: 35712 00:07:56.564 Controller Busy Time: 0 minutes 00:07:56.564 Power Cycles: 0 00:07:56.564 Power On Hours: 0 hours 00:07:56.564 Unsafe Shutdowns: 0 00:07:56.564 Unrecoverable Media Errors: 0 00:07:56.564 Lifetime Error Log Entries: 0 00:07:56.564 Warning Temperature Time: 0 minutes 00:07:56.564 Critical Temperature Time: 0 minutes 00:07:56.564 00:07:56.564 Number of Queues 00:07:56.564 ================ 00:07:56.564 Number of I/O Submission Queues: 64 00:07:56.564 Number of I/O Completion Queues: 64 00:07:56.564 00:07:56.564 ZNS Specific Controller Data 00:07:56.564 ============================ 00:07:56.564 Zone Append Size Limit: 0 00:07:56.564 00:07:56.564 00:07:56.564 Active Namespaces 00:07:56.564 ================= 00:07:56.564 Namespace ID:1 00:07:56.564 Error Recovery Timeout: Unlimited 00:07:56.564 Command Set Identifier: NVM (00h) 00:07:56.564 Deallocate: Supported 00:07:56.564 Deallocated/Unwritten Error: Supported 00:07:56.564 Deallocated Read Value: All 0x00 00:07:56.564 Deallocate in Write Zeroes: Not Supported 00:07:56.564 Deallocated Guard Field: 0xFFFF 00:07:56.564 Flush: 
Supported 00:07:56.564 Reservation: Not Supported 00:07:56.564 Metadata Transferred as: Separate Metadata Buffer 00:07:56.564 Namespace Sharing Capabilities: Private 00:07:56.564 Size (in LBAs): 1548666 (5GiB) 00:07:56.564 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.564 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.564 Thin Provisioning: Not Supported 00:07:56.564 Per-NS Atomic Units: No 00:07:56.564 Maximum Single Source Range Length: 128 00:07:56.564 Maximum Copy Length: 128 00:07:56.564 Maximum Source Range Count: 128 00:07:56.564 NGUID/EUI64 Never Reused: No 00:07:56.564 Namespace Write Protected: No 00:07:56.564 Number of LBA Formats: 8 00:07:56.564 Current LBA Format: LBA Format #07 00:07:56.564 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.564 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.564 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.564 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.564 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.564 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.564 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.564 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.564 00:07:56.564 NVM Specific Namespace Data 00:07:56.564 =========================== 00:07:56.564 Logical Block Storage Tag Mask: 0 00:07:56.564 Protection Information Capabilities: 00:07:56.564 16b Guard Protection Information Storage Tag Support: No 00:07:56.564 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.564 Storage Tag Check Read Support: No 00:07:56.564 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.564 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.564 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:56.823 ===================================================== 00:07:56.823 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.823 ===================================================== 00:07:56.823 Controller Capabilities/Features 00:07:56.823 ================================ 00:07:56.823 Vendor ID: 1b36 00:07:56.823 Subsystem Vendor ID: 1af4 00:07:56.823 Serial Number: 12341 00:07:56.823 Model Number: QEMU NVMe Ctrl 00:07:56.823 Firmware Version: 8.0.0 00:07:56.823 Recommended Arb Burst: 6 00:07:56.823 IEEE OUI Identifier: 00 54 52 00:07:56.823 Multi-path I/O 00:07:56.823 May have multiple subsystem ports: No 00:07:56.823 May have multiple controllers: No 00:07:56.823 Associated with SR-IOV VF: No 00:07:56.823 Max Data Transfer Size: 524288 00:07:56.823 Max Number of Namespaces: 256 00:07:56.823 Max Number of I/O Queues: 64 00:07:56.823 NVMe 
Specification Version (VS): 1.4 00:07:56.823 NVMe Specification Version (Identify): 1.4 00:07:56.823 Maximum Queue Entries: 2048 00:07:56.823 Contiguous Queues Required: Yes 00:07:56.823 Arbitration Mechanisms Supported 00:07:56.823 Weighted Round Robin: Not Supported 00:07:56.823 Vendor Specific: Not Supported 00:07:56.823 Reset Timeout: 7500 ms 00:07:56.823 Doorbell Stride: 4 bytes 00:07:56.823 NVM Subsystem Reset: Not Supported 00:07:56.823 Command Sets Supported 00:07:56.823 NVM Command Set: Supported 00:07:56.823 Boot Partition: Not Supported 00:07:56.823 Memory Page Size Minimum: 4096 bytes 00:07:56.823 Memory Page Size Maximum: 65536 bytes 00:07:56.823 Persistent Memory Region: Not Supported 00:07:56.823 Optional Asynchronous Events Supported 00:07:56.823 Namespace Attribute Notices: Supported 00:07:56.823 Firmware Activation Notices: Not Supported 00:07:56.823 ANA Change Notices: Not Supported 00:07:56.823 PLE Aggregate Log Change Notices: Not Supported 00:07:56.823 LBA Status Info Alert Notices: Not Supported 00:07:56.823 EGE Aggregate Log Change Notices: Not Supported 00:07:56.823 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.823 Zone Descriptor Change Notices: Not Supported 00:07:56.823 Discovery Log Change Notices: Not Supported 00:07:56.823 Controller Attributes 00:07:56.823 128-bit Host Identifier: Not Supported 00:07:56.823 Non-Operational Permissive Mode: Not Supported 00:07:56.823 NVM Sets: Not Supported 00:07:56.823 Read Recovery Levels: Not Supported 00:07:56.823 Endurance Groups: Not Supported 00:07:56.823 Predictable Latency Mode: Not Supported 00:07:56.823 Traffic Based Keep ALive: Not Supported 00:07:56.823 Namespace Granularity: Not Supported 00:07:56.823 SQ Associations: Not Supported 00:07:56.823 UUID List: Not Supported 00:07:56.823 Multi-Domain Subsystem: Not Supported 00:07:56.823 Fixed Capacity Management: Not Supported 00:07:56.823 Variable Capacity Management: Not Supported 00:07:56.823 Delete Endurance Group: Not Supported 00:07:56.823 Delete NVM Set: Not Supported 00:07:56.823 Extended LBA Formats Supported: Supported 00:07:56.823 Flexible Data Placement Supported: Not Supported 00:07:56.823 00:07:56.823 Controller Memory Buffer Support 00:07:56.823 ================================ 00:07:56.823 Supported: No 00:07:56.823 00:07:56.823 Persistent Memory Region Support 00:07:56.823 ================================ 00:07:56.823 Supported: No 00:07:56.823 00:07:56.823 Admin Command Set Attributes 00:07:56.823 ============================ 00:07:56.823 Security Send/Receive: Not Supported 00:07:56.823 Format NVM: Supported 00:07:56.823 Firmware Activate/Download: Not Supported 00:07:56.823 Namespace Management: Supported 00:07:56.823 Device Self-Test: Not Supported 00:07:56.823 Directives: Supported 00:07:56.823 NVMe-MI: Not Supported 00:07:56.823 Virtualization Management: Not Supported 00:07:56.823 Doorbell Buffer Config: Supported 00:07:56.823 Get LBA Status Capability: Not Supported 00:07:56.823 Command & Feature Lockdown Capability: Not Supported 00:07:56.823 Abort Command Limit: 4 00:07:56.823 Async Event Request Limit: 4 00:07:56.823 Number of Firmware Slots: N/A 00:07:56.823 Firmware Slot 1 Read-Only: N/A 00:07:56.823 Firmware Activation Without Reset: N/A 00:07:56.823 Multiple Update Detection Support: N/A 00:07:56.823 Firmware Update Granularity: No Information Provided 00:07:56.823 Per-Namespace SMART Log: Yes 00:07:56.823 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.823 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:56.823 Command Effects Log Page: Supported 00:07:56.823 Get Log Page Extended Data: Supported 00:07:56.823 Telemetry Log Pages: Not Supported 00:07:56.824 Persistent Event Log Pages: Not Supported 00:07:56.824 Supported Log Pages Log Page: May Support 00:07:56.824 Commands Supported & Effects Log Page: Not Supported 00:07:56.824 Feature Identifiers & Effects Log Page:May Support 00:07:56.824 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.824 Data Area 4 for Telemetry Log: Not Supported 00:07:56.824 Error Log Page Entries Supported: 1 00:07:56.824 Keep Alive: Not Supported 00:07:56.824 00:07:56.824 NVM Command Set Attributes 00:07:56.824 ========================== 00:07:56.824 Submission Queue Entry Size 00:07:56.824 Max: 64 00:07:56.824 Min: 64 00:07:56.824 Completion Queue Entry Size 00:07:56.824 Max: 16 00:07:56.824 Min: 16 00:07:56.824 Number of Namespaces: 256 00:07:56.824 Compare Command: Supported 00:07:56.824 Write Uncorrectable Command: Not Supported 00:07:56.824 Dataset Management Command: Supported 00:07:56.824 Write Zeroes Command: Supported 00:07:56.824 Set Features Save Field: Supported 00:07:56.824 Reservations: Not Supported 00:07:56.824 Timestamp: Supported 00:07:56.824 Copy: Supported 00:07:56.824 Volatile Write Cache: Present 00:07:56.824 Atomic Write Unit (Normal): 1 00:07:56.824 Atomic Write Unit (PFail): 1 00:07:56.824 Atomic Compare & Write Unit: 1 00:07:56.824 Fused Compare & Write: Not Supported 00:07:56.824 Scatter-Gather List 00:07:56.824 SGL Command Set: Supported 00:07:56.824 SGL Keyed: Not Supported 00:07:56.824 SGL Bit Bucket Descriptor: Not Supported 00:07:56.824 SGL Metadata Pointer: Not Supported 00:07:56.824 Oversized SGL: Not Supported 00:07:56.824 SGL Metadata Address: Not Supported 00:07:56.824 SGL Offset: Not Supported 00:07:56.824 Transport SGL Data Block: Not Supported 00:07:56.824 Replay Protected Memory Block: Not Supported 00:07:56.824 00:07:56.824 Firmware Slot Information 00:07:56.824 ========================= 00:07:56.824 Active slot: 1 00:07:56.824 Slot 1 Firmware Revision: 1.0 00:07:56.824 00:07:56.824 00:07:56.824 Commands Supported and Effects 00:07:56.824 ============================== 00:07:56.824 Admin Commands 00:07:56.824 -------------- 00:07:56.824 Delete I/O Submission Queue (00h): Supported 00:07:56.824 Create I/O Submission Queue (01h): Supported 00:07:56.824 Get Log Page (02h): Supported 00:07:56.824 Delete I/O Completion Queue (04h): Supported 00:07:56.824 Create I/O Completion Queue (05h): Supported 00:07:56.824 Identify (06h): Supported 00:07:56.824 Abort (08h): Supported 00:07:56.824 Set Features (09h): Supported 00:07:56.824 Get Features (0Ah): Supported 00:07:56.824 Asynchronous Event Request (0Ch): Supported 00:07:56.824 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.824 Directive Send (19h): Supported 00:07:56.824 Directive Receive (1Ah): Supported 00:07:56.824 Virtualization Management (1Ch): Supported 00:07:56.824 Doorbell Buffer Config (7Ch): Supported 00:07:56.824 Format NVM (80h): Supported LBA-Change 00:07:56.824 I/O Commands 00:07:56.824 ------------ 00:07:56.824 Flush (00h): Supported LBA-Change 00:07:56.824 Write (01h): Supported LBA-Change 00:07:56.824 Read (02h): Supported 00:07:56.824 Compare (05h): Supported 00:07:56.824 Write Zeroes (08h): Supported LBA-Change 00:07:56.824 Dataset Management (09h): Supported LBA-Change 00:07:56.824 Unknown (0Ch): Supported 00:07:56.824 Unknown (12h): Supported 00:07:56.824 Copy (19h): Supported LBA-Change 00:07:56.824 Unknown (1Dh): 
Supported LBA-Change 00:07:56.824 00:07:56.824 Error Log 00:07:56.824 ========= 00:07:56.824 00:07:56.824 Arbitration 00:07:56.824 =========== 00:07:56.824 Arbitration Burst: no limit 00:07:56.824 00:07:56.824 Power Management 00:07:56.824 ================ 00:07:56.824 Number of Power States: 1 00:07:56.824 Current Power State: Power State #0 00:07:56.824 Power State #0: 00:07:56.824 Max Power: 25.00 W 00:07:56.824 Non-Operational State: Operational 00:07:56.824 Entry Latency: 16 microseconds 00:07:56.824 Exit Latency: 4 microseconds 00:07:56.824 Relative Read Throughput: 0 00:07:56.824 Relative Read Latency: 0 00:07:56.824 Relative Write Throughput: 0 00:07:56.824 Relative Write Latency: 0 00:07:56.824 Idle Power: Not Reported 00:07:56.824 Active Power: Not Reported 00:07:56.824 Non-Operational Permissive Mode: Not Supported 00:07:56.824 00:07:56.824 Health Information 00:07:56.824 ================== 00:07:56.824 Critical Warnings: 00:07:56.824 Available Spare Space: OK 00:07:56.824 Temperature: OK 00:07:56.824 Device Reliability: OK 00:07:56.824 Read Only: No 00:07:56.824 Volatile Memory Backup: OK 00:07:56.824 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.824 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.824 Available Spare: 0% 00:07:56.824 Available Spare Threshold: 0% 00:07:56.824 Life Percentage Used: 0% 00:07:56.824 Data Units Read: 985 00:07:56.824 Data Units Written: 852 00:07:56.824 Host Read Commands: 54980 00:07:56.824 Host Write Commands: 53769 00:07:56.824 Controller Busy Time: 0 minutes 00:07:56.824 Power Cycles: 0 00:07:56.824 Power On Hours: 0 hours 00:07:56.824 Unsafe Shutdowns: 0 00:07:56.824 Unrecoverable Media Errors: 0 00:07:56.824 Lifetime Error Log Entries: 0 00:07:56.824 Warning Temperature Time: 0 minutes 00:07:56.824 Critical Temperature Time: 0 minutes 00:07:56.824 00:07:56.824 Number of Queues 00:07:56.824 ================ 00:07:56.824 Number of I/O Submission Queues: 64 00:07:56.824 Number of I/O Completion Queues: 64 00:07:56.824 00:07:56.824 ZNS Specific Controller Data 00:07:56.824 ============================ 00:07:56.824 Zone Append Size Limit: 0 00:07:56.824 00:07:56.824 00:07:56.824 Active Namespaces 00:07:56.824 ================= 00:07:56.824 Namespace ID:1 00:07:56.824 Error Recovery Timeout: Unlimited 00:07:56.824 Command Set Identifier: NVM (00h) 00:07:56.824 Deallocate: Supported 00:07:56.824 Deallocated/Unwritten Error: Supported 00:07:56.824 Deallocated Read Value: All 0x00 00:07:56.824 Deallocate in Write Zeroes: Not Supported 00:07:56.824 Deallocated Guard Field: 0xFFFF 00:07:56.824 Flush: Supported 00:07:56.824 Reservation: Not Supported 00:07:56.824 Namespace Sharing Capabilities: Private 00:07:56.824 Size (in LBAs): 1310720 (5GiB) 00:07:56.824 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.824 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.824 Thin Provisioning: Not Supported 00:07:56.824 Per-NS Atomic Units: No 00:07:56.824 Maximum Single Source Range Length: 128 00:07:56.824 Maximum Copy Length: 128 00:07:56.824 Maximum Source Range Count: 128 00:07:56.824 NGUID/EUI64 Never Reused: No 00:07:56.824 Namespace Write Protected: No 00:07:56.824 Number of LBA Formats: 8 00:07:56.824 Current LBA Format: LBA Format #04 00:07:56.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.824 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.824 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.824 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.824 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:56.824 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.824 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.824 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.824 00:07:56.824 NVM Specific Namespace Data 00:07:56.824 =========================== 00:07:56.824 Logical Block Storage Tag Mask: 0 00:07:56.824 Protection Information Capabilities: 00:07:56.824 16b Guard Protection Information Storage Tag Support: No 00:07:56.824 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.824 Storage Tag Check Read Support: No 00:07:56.824 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.824 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.824 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:57.083 ===================================================== 00:07:57.083 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.083 ===================================================== 00:07:57.083 Controller Capabilities/Features 00:07:57.083 ================================ 00:07:57.083 Vendor ID: 1b36 00:07:57.083 Subsystem Vendor ID: 1af4 00:07:57.083 Serial Number: 12342 00:07:57.083 Model Number: QEMU NVMe Ctrl 00:07:57.083 Firmware Version: 8.0.0 00:07:57.083 Recommended Arb Burst: 6 00:07:57.083 IEEE OUI Identifier: 00 54 52 00:07:57.083 Multi-path I/O 00:07:57.083 May have multiple subsystem ports: No 00:07:57.083 May have multiple controllers: No 00:07:57.083 Associated with SR-IOV VF: No 00:07:57.083 Max Data Transfer Size: 524288 00:07:57.083 Max Number of Namespaces: 256 00:07:57.083 Max Number of I/O Queues: 64 00:07:57.083 NVMe Specification Version (VS): 1.4 00:07:57.083 NVMe Specification Version (Identify): 1.4 00:07:57.083 Maximum Queue Entries: 2048 00:07:57.083 Contiguous Queues Required: Yes 00:07:57.083 Arbitration Mechanisms Supported 00:07:57.083 Weighted Round Robin: Not Supported 00:07:57.083 Vendor Specific: Not Supported 00:07:57.083 Reset Timeout: 7500 ms 00:07:57.083 Doorbell Stride: 4 bytes 00:07:57.083 NVM Subsystem Reset: Not Supported 00:07:57.083 Command Sets Supported 00:07:57.083 NVM Command Set: Supported 00:07:57.083 Boot Partition: Not Supported 00:07:57.083 Memory Page Size Minimum: 4096 bytes 00:07:57.083 Memory Page Size Maximum: 65536 bytes 00:07:57.083 Persistent Memory Region: Not Supported 00:07:57.083 Optional Asynchronous Events Supported 00:07:57.083 Namespace Attribute Notices: Supported 00:07:57.083 Firmware Activation Notices: Not Supported 00:07:57.083 ANA Change Notices: Not Supported 00:07:57.083 PLE Aggregate Log Change Notices: Not Supported 00:07:57.083 LBA Status Info Alert Notices: 
Not Supported 00:07:57.083 EGE Aggregate Log Change Notices: Not Supported 00:07:57.083 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.083 Zone Descriptor Change Notices: Not Supported 00:07:57.083 Discovery Log Change Notices: Not Supported 00:07:57.083 Controller Attributes 00:07:57.083 128-bit Host Identifier: Not Supported 00:07:57.083 Non-Operational Permissive Mode: Not Supported 00:07:57.083 NVM Sets: Not Supported 00:07:57.083 Read Recovery Levels: Not Supported 00:07:57.083 Endurance Groups: Not Supported 00:07:57.083 Predictable Latency Mode: Not Supported 00:07:57.083 Traffic Based Keep ALive: Not Supported 00:07:57.083 Namespace Granularity: Not Supported 00:07:57.083 SQ Associations: Not Supported 00:07:57.083 UUID List: Not Supported 00:07:57.083 Multi-Domain Subsystem: Not Supported 00:07:57.083 Fixed Capacity Management: Not Supported 00:07:57.083 Variable Capacity Management: Not Supported 00:07:57.083 Delete Endurance Group: Not Supported 00:07:57.083 Delete NVM Set: Not Supported 00:07:57.083 Extended LBA Formats Supported: Supported 00:07:57.083 Flexible Data Placement Supported: Not Supported 00:07:57.083 00:07:57.083 Controller Memory Buffer Support 00:07:57.083 ================================ 00:07:57.083 Supported: No 00:07:57.083 00:07:57.083 Persistent Memory Region Support 00:07:57.083 ================================ 00:07:57.083 Supported: No 00:07:57.083 00:07:57.083 Admin Command Set Attributes 00:07:57.084 ============================ 00:07:57.084 Security Send/Receive: Not Supported 00:07:57.084 Format NVM: Supported 00:07:57.084 Firmware Activate/Download: Not Supported 00:07:57.084 Namespace Management: Supported 00:07:57.084 Device Self-Test: Not Supported 00:07:57.084 Directives: Supported 00:07:57.084 NVMe-MI: Not Supported 00:07:57.084 Virtualization Management: Not Supported 00:07:57.084 Doorbell Buffer Config: Supported 00:07:57.084 Get LBA Status Capability: Not Supported 00:07:57.084 Command & Feature Lockdown Capability: Not Supported 00:07:57.084 Abort Command Limit: 4 00:07:57.084 Async Event Request Limit: 4 00:07:57.084 Number of Firmware Slots: N/A 00:07:57.084 Firmware Slot 1 Read-Only: N/A 00:07:57.084 Firmware Activation Without Reset: N/A 00:07:57.084 Multiple Update Detection Support: N/A 00:07:57.084 Firmware Update Granularity: No Information Provided 00:07:57.084 Per-Namespace SMART Log: Yes 00:07:57.084 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.084 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:57.084 Command Effects Log Page: Supported 00:07:57.084 Get Log Page Extended Data: Supported 00:07:57.084 Telemetry Log Pages: Not Supported 00:07:57.084 Persistent Event Log Pages: Not Supported 00:07:57.084 Supported Log Pages Log Page: May Support 00:07:57.084 Commands Supported & Effects Log Page: Not Supported 00:07:57.084 Feature Identifiers & Effects Log Page:May Support 00:07:57.084 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.084 Data Area 4 for Telemetry Log: Not Supported 00:07:57.084 Error Log Page Entries Supported: 1 00:07:57.084 Keep Alive: Not Supported 00:07:57.084 00:07:57.084 NVM Command Set Attributes 00:07:57.084 ========================== 00:07:57.084 Submission Queue Entry Size 00:07:57.084 Max: 64 00:07:57.084 Min: 64 00:07:57.084 Completion Queue Entry Size 00:07:57.084 Max: 16 00:07:57.084 Min: 16 00:07:57.084 Number of Namespaces: 256 00:07:57.084 Compare Command: Supported 00:07:57.084 Write Uncorrectable Command: Not Supported 00:07:57.084 Dataset Management Command: 
Supported 00:07:57.084 Write Zeroes Command: Supported 00:07:57.084 Set Features Save Field: Supported 00:07:57.084 Reservations: Not Supported 00:07:57.084 Timestamp: Supported 00:07:57.084 Copy: Supported 00:07:57.084 Volatile Write Cache: Present 00:07:57.084 Atomic Write Unit (Normal): 1 00:07:57.084 Atomic Write Unit (PFail): 1 00:07:57.084 Atomic Compare & Write Unit: 1 00:07:57.084 Fused Compare & Write: Not Supported 00:07:57.084 Scatter-Gather List 00:07:57.084 SGL Command Set: Supported 00:07:57.084 SGL Keyed: Not Supported 00:07:57.084 SGL Bit Bucket Descriptor: Not Supported 00:07:57.084 SGL Metadata Pointer: Not Supported 00:07:57.084 Oversized SGL: Not Supported 00:07:57.084 SGL Metadata Address: Not Supported 00:07:57.084 SGL Offset: Not Supported 00:07:57.084 Transport SGL Data Block: Not Supported 00:07:57.084 Replay Protected Memory Block: Not Supported 00:07:57.084 00:07:57.084 Firmware Slot Information 00:07:57.084 ========================= 00:07:57.084 Active slot: 1 00:07:57.084 Slot 1 Firmware Revision: 1.0 00:07:57.084 00:07:57.084 00:07:57.084 Commands Supported and Effects 00:07:57.084 ============================== 00:07:57.084 Admin Commands 00:07:57.084 -------------- 00:07:57.084 Delete I/O Submission Queue (00h): Supported 00:07:57.084 Create I/O Submission Queue (01h): Supported 00:07:57.084 Get Log Page (02h): Supported 00:07:57.084 Delete I/O Completion Queue (04h): Supported 00:07:57.084 Create I/O Completion Queue (05h): Supported 00:07:57.084 Identify (06h): Supported 00:07:57.084 Abort (08h): Supported 00:07:57.084 Set Features (09h): Supported 00:07:57.084 Get Features (0Ah): Supported 00:07:57.084 Asynchronous Event Request (0Ch): Supported 00:07:57.084 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.084 Directive Send (19h): Supported 00:07:57.084 Directive Receive (1Ah): Supported 00:07:57.084 Virtualization Management (1Ch): Supported 00:07:57.084 Doorbell Buffer Config (7Ch): Supported 00:07:57.084 Format NVM (80h): Supported LBA-Change 00:07:57.084 I/O Commands 00:07:57.084 ------------ 00:07:57.084 Flush (00h): Supported LBA-Change 00:07:57.084 Write (01h): Supported LBA-Change 00:07:57.084 Read (02h): Supported 00:07:57.084 Compare (05h): Supported 00:07:57.084 Write Zeroes (08h): Supported LBA-Change 00:07:57.084 Dataset Management (09h): Supported LBA-Change 00:07:57.084 Unknown (0Ch): Supported 00:07:57.084 Unknown (12h): Supported 00:07:57.084 Copy (19h): Supported LBA-Change 00:07:57.084 Unknown (1Dh): Supported LBA-Change 00:07:57.084 00:07:57.084 Error Log 00:07:57.084 ========= 00:07:57.084 00:07:57.084 Arbitration 00:07:57.084 =========== 00:07:57.084 Arbitration Burst: no limit 00:07:57.084 00:07:57.084 Power Management 00:07:57.084 ================ 00:07:57.084 Number of Power States: 1 00:07:57.084 Current Power State: Power State #0 00:07:57.084 Power State #0: 00:07:57.084 Max Power: 25.00 W 00:07:57.084 Non-Operational State: Operational 00:07:57.084 Entry Latency: 16 microseconds 00:07:57.084 Exit Latency: 4 microseconds 00:07:57.084 Relative Read Throughput: 0 00:07:57.084 Relative Read Latency: 0 00:07:57.084 Relative Write Throughput: 0 00:07:57.084 Relative Write Latency: 0 00:07:57.084 Idle Power: Not Reported 00:07:57.084 Active Power: Not Reported 00:07:57.084 Non-Operational Permissive Mode: Not Supported 00:07:57.084 00:07:57.084 Health Information 00:07:57.084 ================== 00:07:57.084 Critical Warnings: 00:07:57.084 Available Spare Space: OK 00:07:57.084 Temperature: OK 00:07:57.084 Device 
Reliability: OK 00:07:57.084 Read Only: No 00:07:57.084 Volatile Memory Backup: OK 00:07:57.084 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.084 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.084 Available Spare: 0% 00:07:57.084 Available Spare Threshold: 0% 00:07:57.084 Life Percentage Used: 0% 00:07:57.084 Data Units Read: 2239 00:07:57.084 Data Units Written: 2026 00:07:57.084 Host Read Commands: 111032 00:07:57.084 Host Write Commands: 109301 00:07:57.084 Controller Busy Time: 0 minutes 00:07:57.084 Power Cycles: 0 00:07:57.084 Power On Hours: 0 hours 00:07:57.084 Unsafe Shutdowns: 0 00:07:57.084 Unrecoverable Media Errors: 0 00:07:57.084 Lifetime Error Log Entries: 0 00:07:57.084 Warning Temperature Time: 0 minutes 00:07:57.084 Critical Temperature Time: 0 minutes 00:07:57.084 00:07:57.084 Number of Queues 00:07:57.084 ================ 00:07:57.084 Number of I/O Submission Queues: 64 00:07:57.084 Number of I/O Completion Queues: 64 00:07:57.084 00:07:57.084 ZNS Specific Controller Data 00:07:57.084 ============================ 00:07:57.084 Zone Append Size Limit: 0 00:07:57.084 00:07:57.084 00:07:57.084 Active Namespaces 00:07:57.084 ================= 00:07:57.084 Namespace ID:1 00:07:57.084 Error Recovery Timeout: Unlimited 00:07:57.084 Command Set Identifier: NVM (00h) 00:07:57.084 Deallocate: Supported 00:07:57.084 Deallocated/Unwritten Error: Supported 00:07:57.084 Deallocated Read Value: All 0x00 00:07:57.084 Deallocate in Write Zeroes: Not Supported 00:07:57.084 Deallocated Guard Field: 0xFFFF 00:07:57.084 Flush: Supported 00:07:57.084 Reservation: Not Supported 00:07:57.084 Namespace Sharing Capabilities: Private 00:07:57.084 Size (in LBAs): 1048576 (4GiB) 00:07:57.084 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.084 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.084 Thin Provisioning: Not Supported 00:07:57.084 Per-NS Atomic Units: No 00:07:57.084 Maximum Single Source Range Length: 128 00:07:57.084 Maximum Copy Length: 128 00:07:57.084 Maximum Source Range Count: 128 00:07:57.084 NGUID/EUI64 Never Reused: No 00:07:57.084 Namespace Write Protected: No 00:07:57.084 Number of LBA Formats: 8 00:07:57.084 Current LBA Format: LBA Format #04 00:07:57.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.084 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.084 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.084 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.084 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.084 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.084 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.084 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.084 00:07:57.084 NVM Specific Namespace Data 00:07:57.084 =========================== 00:07:57.084 Logical Block Storage Tag Mask: 0 00:07:57.084 Protection Information Capabilities: 00:07:57.084 16b Guard Protection Information Storage Tag Support: No 00:07:57.084 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.085 Storage Tag Check Read Support: No 00:07:57.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Namespace ID:2 00:07:57.085 Error Recovery Timeout: Unlimited 00:07:57.085 Command Set Identifier: NVM (00h) 00:07:57.085 Deallocate: Supported 00:07:57.085 Deallocated/Unwritten Error: Supported 00:07:57.085 Deallocated Read Value: All 0x00 00:07:57.085 Deallocate in Write Zeroes: Not Supported 00:07:57.085 Deallocated Guard Field: 0xFFFF 00:07:57.085 Flush: Supported 00:07:57.085 Reservation: Not Supported 00:07:57.085 Namespace Sharing Capabilities: Private 00:07:57.085 Size (in LBAs): 1048576 (4GiB) 00:07:57.085 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.085 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.085 Thin Provisioning: Not Supported 00:07:57.085 Per-NS Atomic Units: No 00:07:57.085 Maximum Single Source Range Length: 128 00:07:57.085 Maximum Copy Length: 128 00:07:57.085 Maximum Source Range Count: 128 00:07:57.085 NGUID/EUI64 Never Reused: No 00:07:57.085 Namespace Write Protected: No 00:07:57.085 Number of LBA Formats: 8 00:07:57.085 Current LBA Format: LBA Format #04 00:07:57.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.085 00:07:57.085 NVM Specific Namespace Data 00:07:57.085 =========================== 00:07:57.085 Logical Block Storage Tag Mask: 0 00:07:57.085 Protection Information Capabilities: 00:07:57.085 16b Guard Protection Information Storage Tag Support: No 00:07:57.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.085 Storage Tag Check Read Support: No 00:07:57.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Namespace ID:3 00:07:57.085 Error Recovery Timeout: Unlimited 00:07:57.085 Command Set Identifier: NVM (00h) 00:07:57.085 Deallocate: Supported 00:07:57.085 Deallocated/Unwritten Error: Supported 00:07:57.085 Deallocated Read Value: All 0x00 00:07:57.085 Deallocate in Write Zeroes: Not Supported 00:07:57.085 Deallocated Guard Field: 0xFFFF 00:07:57.085 Flush: Supported 00:07:57.085 Reservation: Not Supported 00:07:57.085 
Namespace Sharing Capabilities: Private 00:07:57.085 Size (in LBAs): 1048576 (4GiB) 00:07:57.085 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.085 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.085 Thin Provisioning: Not Supported 00:07:57.085 Per-NS Atomic Units: No 00:07:57.085 Maximum Single Source Range Length: 128 00:07:57.085 Maximum Copy Length: 128 00:07:57.085 Maximum Source Range Count: 128 00:07:57.085 NGUID/EUI64 Never Reused: No 00:07:57.085 Namespace Write Protected: No 00:07:57.085 Number of LBA Formats: 8 00:07:57.085 Current LBA Format: LBA Format #04 00:07:57.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.085 00:07:57.085 NVM Specific Namespace Data 00:07:57.085 =========================== 00:07:57.085 Logical Block Storage Tag Mask: 0 00:07:57.085 Protection Information Capabilities: 00:07:57.085 16b Guard Protection Information Storage Tag Support: No 00:07:57.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.085 Storage Tag Check Read Support: No 00:07:57.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.085 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.085 19:06:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:57.344 ===================================================== 00:07:57.344 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.344 ===================================================== 00:07:57.344 Controller Capabilities/Features 00:07:57.344 ================================ 00:07:57.344 Vendor ID: 1b36 00:07:57.344 Subsystem Vendor ID: 1af4 00:07:57.344 Serial Number: 12343 00:07:57.344 Model Number: QEMU NVMe Ctrl 00:07:57.344 Firmware Version: 8.0.0 00:07:57.344 Recommended Arb Burst: 6 00:07:57.344 IEEE OUI Identifier: 00 54 52 00:07:57.344 Multi-path I/O 00:07:57.344 May have multiple subsystem ports: No 00:07:57.344 May have multiple controllers: Yes 00:07:57.344 Associated with SR-IOV VF: No 00:07:57.344 Max Data Transfer Size: 524288 00:07:57.344 Max Number of Namespaces: 256 00:07:57.344 Max Number of I/O Queues: 64 00:07:57.344 NVMe Specification Version (VS): 1.4 00:07:57.344 NVMe Specification Version (Identify): 1.4 00:07:57.344 Maximum Queue Entries: 2048 
00:07:57.344 Contiguous Queues Required: Yes 00:07:57.344 Arbitration Mechanisms Supported 00:07:57.344 Weighted Round Robin: Not Supported 00:07:57.344 Vendor Specific: Not Supported 00:07:57.344 Reset Timeout: 7500 ms 00:07:57.344 Doorbell Stride: 4 bytes 00:07:57.344 NVM Subsystem Reset: Not Supported 00:07:57.344 Command Sets Supported 00:07:57.344 NVM Command Set: Supported 00:07:57.344 Boot Partition: Not Supported 00:07:57.344 Memory Page Size Minimum: 4096 bytes 00:07:57.344 Memory Page Size Maximum: 65536 bytes 00:07:57.344 Persistent Memory Region: Not Supported 00:07:57.344 Optional Asynchronous Events Supported 00:07:57.344 Namespace Attribute Notices: Supported 00:07:57.344 Firmware Activation Notices: Not Supported 00:07:57.344 ANA Change Notices: Not Supported 00:07:57.344 PLE Aggregate Log Change Notices: Not Supported 00:07:57.344 LBA Status Info Alert Notices: Not Supported 00:07:57.344 EGE Aggregate Log Change Notices: Not Supported 00:07:57.344 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.344 Zone Descriptor Change Notices: Not Supported 00:07:57.344 Discovery Log Change Notices: Not Supported 00:07:57.344 Controller Attributes 00:07:57.344 128-bit Host Identifier: Not Supported 00:07:57.344 Non-Operational Permissive Mode: Not Supported 00:07:57.344 NVM Sets: Not Supported 00:07:57.344 Read Recovery Levels: Not Supported 00:07:57.344 Endurance Groups: Supported 00:07:57.344 Predictable Latency Mode: Not Supported 00:07:57.344 Traffic Based Keep Alive: Not Supported 00:07:57.344 Namespace Granularity: Not Supported 00:07:57.344 SQ Associations: Not Supported 00:07:57.344 UUID List: Not Supported 00:07:57.344 Multi-Domain Subsystem: Not Supported 00:07:57.344 Fixed Capacity Management: Not Supported 00:07:57.344 Variable Capacity Management: Not Supported 00:07:57.344 Delete Endurance Group: Not Supported 00:07:57.344 Delete NVM Set: Not Supported 00:07:57.344 Extended LBA Formats Supported: Supported 00:07:57.344 Flexible Data Placement Supported: Supported 00:07:57.344 00:07:57.344 Controller Memory Buffer Support 00:07:57.344 ================================ 00:07:57.344 Supported: No 00:07:57.344 00:07:57.344 Persistent Memory Region Support 00:07:57.344 ================================ 00:07:57.344 Supported: No 00:07:57.344 00:07:57.344 Admin Command Set Attributes 00:07:57.344 ============================ 00:07:57.344 Security Send/Receive: Not Supported 00:07:57.344 Format NVM: Supported 00:07:57.344 Firmware Activate/Download: Not Supported 00:07:57.344 Namespace Management: Supported 00:07:57.344 Device Self-Test: Not Supported 00:07:57.344 Directives: Supported 00:07:57.344 NVMe-MI: Not Supported 00:07:57.344 Virtualization Management: Not Supported 00:07:57.344 Doorbell Buffer Config: Supported 00:07:57.344 Get LBA Status Capability: Not Supported 00:07:57.344 Command & Feature Lockdown Capability: Not Supported 00:07:57.344 Abort Command Limit: 4 00:07:57.344 Async Event Request Limit: 4 00:07:57.344 Number of Firmware Slots: N/A 00:07:57.344 Firmware Slot 1 Read-Only: N/A 00:07:57.344 Firmware Activation Without Reset: N/A 00:07:57.344 Multiple Update Detection Support: N/A 00:07:57.344 Firmware Update Granularity: No Information Provided 00:07:57.344 Per-Namespace SMART Log: Yes 00:07:57.344 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.344 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:57.344 Command Effects Log Page: Supported 00:07:57.344 Get Log Page Extended Data: Supported 00:07:57.344 Telemetry Log Pages: Not 
Supported 00:07:57.344 Persistent Event Log Pages: Not Supported 00:07:57.344 Supported Log Pages Log Page: May Support 00:07:57.344 Commands Supported & Effects Log Page: Not Supported 00:07:57.344 Feature Identifiers & Effects Log Page: May Support 00:07:57.344 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.344 Data Area 4 for Telemetry Log: Not Supported 00:07:57.344 Error Log Page Entries Supported: 1 00:07:57.344 Keep Alive: Not Supported 00:07:57.344 00:07:57.344 NVM Command Set Attributes 00:07:57.344 ========================== 00:07:57.344 Submission Queue Entry Size 00:07:57.344 Max: 64 00:07:57.344 Min: 64 00:07:57.344 Completion Queue Entry Size 00:07:57.344 Max: 16 00:07:57.344 Min: 16 00:07:57.344 Number of Namespaces: 256 00:07:57.344 Compare Command: Supported 00:07:57.344 Write Uncorrectable Command: Not Supported 00:07:57.344 Dataset Management Command: Supported 00:07:57.344 Write Zeroes Command: Supported 00:07:57.344 Set Features Save Field: Supported 00:07:57.344 Reservations: Not Supported 00:07:57.344 Timestamp: Supported 00:07:57.344 Copy: Supported 00:07:57.344 Volatile Write Cache: Present 00:07:57.344 Atomic Write Unit (Normal): 1 00:07:57.344 Atomic Write Unit (PFail): 1 00:07:57.344 Atomic Compare & Write Unit: 1 00:07:57.344 Fused Compare & Write: Not Supported 00:07:57.344 Scatter-Gather List 00:07:57.344 SGL Command Set: Supported 00:07:57.344 SGL Keyed: Not Supported 00:07:57.344 SGL Bit Bucket Descriptor: Not Supported 00:07:57.344 SGL Metadata Pointer: Not Supported 00:07:57.344 Oversized SGL: Not Supported 00:07:57.344 SGL Metadata Address: Not Supported 00:07:57.344 SGL Offset: Not Supported 00:07:57.344 Transport SGL Data Block: Not Supported 00:07:57.344 Replay Protected Memory Block: Not Supported 00:07:57.344 00:07:57.344 Firmware Slot Information 00:07:57.344 ========================= 00:07:57.344 Active slot: 1 00:07:57.344 Slot 1 Firmware Revision: 1.0 00:07:57.344 00:07:57.344 00:07:57.344 Commands Supported and Effects 00:07:57.344 ============================== 00:07:57.344 Admin Commands 00:07:57.344 -------------- 00:07:57.344 Delete I/O Submission Queue (00h): Supported 00:07:57.344 Create I/O Submission Queue (01h): Supported 00:07:57.345 Get Log Page (02h): Supported 00:07:57.345 Delete I/O Completion Queue (04h): Supported 00:07:57.345 Create I/O Completion Queue (05h): Supported 00:07:57.345 Identify (06h): Supported 00:07:57.345 Abort (08h): Supported 00:07:57.345 Set Features (09h): Supported 00:07:57.345 Get Features (0Ah): Supported 00:07:57.345 Asynchronous Event Request (0Ch): Supported 00:07:57.345 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.345 Directive Send (19h): Supported 00:07:57.345 Directive Receive (1Ah): Supported 00:07:57.345 Virtualization Management (1Ch): Supported 00:07:57.345 Doorbell Buffer Config (7Ch): Supported 00:07:57.345 Format NVM (80h): Supported LBA-Change 00:07:57.345 I/O Commands 00:07:57.345 ------------ 00:07:57.345 Flush (00h): Supported LBA-Change 00:07:57.345 Write (01h): Supported LBA-Change 00:07:57.345 Read (02h): Supported 00:07:57.345 Compare (05h): Supported 00:07:57.345 Write Zeroes (08h): Supported LBA-Change 00:07:57.345 Dataset Management (09h): Supported LBA-Change 00:07:57.345 Unknown (0Ch): Supported 00:07:57.345 Unknown (12h): Supported 00:07:57.345 Copy (19h): Supported LBA-Change 00:07:57.345 Unknown (1Dh): Supported LBA-Change 00:07:57.345 00:07:57.345 Error Log 00:07:57.345 ========= 00:07:57.345 00:07:57.345 Arbitration 00:07:57.345 =========== 
00:07:57.345 Arbitration Burst: no limit 00:07:57.345 00:07:57.345 Power Management 00:07:57.345 ================ 00:07:57.345 Number of Power States: 1 00:07:57.345 Current Power State: Power State #0 00:07:57.345 Power State #0: 00:07:57.345 Max Power: 25.00 W 00:07:57.345 Non-Operational State: Operational 00:07:57.345 Entry Latency: 16 microseconds 00:07:57.345 Exit Latency: 4 microseconds 00:07:57.345 Relative Read Throughput: 0 00:07:57.345 Relative Read Latency: 0 00:07:57.345 Relative Write Throughput: 0 00:07:57.345 Relative Write Latency: 0 00:07:57.345 Idle Power: Not Reported 00:07:57.345 Active Power: Not Reported 00:07:57.345 Non-Operational Permissive Mode: Not Supported 00:07:57.345 00:07:57.345 Health Information 00:07:57.345 ================== 00:07:57.345 Critical Warnings: 00:07:57.345 Available Spare Space: OK 00:07:57.345 Temperature: OK 00:07:57.345 Device Reliability: OK 00:07:57.345 Read Only: No 00:07:57.345 Volatile Memory Backup: OK 00:07:57.345 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.345 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.345 Available Spare: 0% 00:07:57.345 Available Spare Threshold: 0% 00:07:57.345 Life Percentage Used: 0% 00:07:57.345 Data Units Read: 1031 00:07:57.345 Data Units Written: 960 00:07:57.345 Host Read Commands: 39331 00:07:57.345 Host Write Commands: 38755 00:07:57.345 Controller Busy Time: 0 minutes 00:07:57.345 Power Cycles: 0 00:07:57.345 Power On Hours: 0 hours 00:07:57.345 Unsafe Shutdowns: 0 00:07:57.345 Unrecoverable Media Errors: 0 00:07:57.345 Lifetime Error Log Entries: 0 00:07:57.345 Warning Temperature Time: 0 minutes 00:07:57.345 Critical Temperature Time: 0 minutes 00:07:57.345 00:07:57.345 Number of Queues 00:07:57.345 ================ 00:07:57.345 Number of I/O Submission Queues: 64 00:07:57.345 Number of I/O Completion Queues: 64 00:07:57.345 00:07:57.345 ZNS Specific Controller Data 00:07:57.345 ============================ 00:07:57.345 Zone Append Size Limit: 0 00:07:57.345 00:07:57.345 00:07:57.345 Active Namespaces 00:07:57.345 ================= 00:07:57.345 Namespace ID:1 00:07:57.345 Error Recovery Timeout: Unlimited 00:07:57.345 Command Set Identifier: NVM (00h) 00:07:57.345 Deallocate: Supported 00:07:57.345 Deallocated/Unwritten Error: Supported 00:07:57.345 Deallocated Read Value: All 0x00 00:07:57.345 Deallocate in Write Zeroes: Not Supported 00:07:57.345 Deallocated Guard Field: 0xFFFF 00:07:57.345 Flush: Supported 00:07:57.345 Reservation: Not Supported 00:07:57.345 Namespace Sharing Capabilities: Multiple Controllers 00:07:57.345 Size (in LBAs): 262144 (1GiB) 00:07:57.345 Capacity (in LBAs): 262144 (1GiB) 00:07:57.345 Utilization (in LBAs): 262144 (1GiB) 00:07:57.345 Thin Provisioning: Not Supported 00:07:57.345 Per-NS Atomic Units: No 00:07:57.345 Maximum Single Source Range Length: 128 00:07:57.345 Maximum Copy Length: 128 00:07:57.345 Maximum Source Range Count: 128 00:07:57.345 NGUID/EUI64 Never Reused: No 00:07:57.345 Namespace Write Protected: No 00:07:57.345 Endurance group ID: 1 00:07:57.345 Number of LBA Formats: 8 00:07:57.345 Current LBA Format: LBA Format #04 00:07:57.345 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.345 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.345 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.345 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.345 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.345 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.345 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:57.345 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.345 00:07:57.345 Get Feature FDP: 00:07:57.345 ================ 00:07:57.345 Enabled: Yes 00:07:57.345 FDP configuration index: 0 00:07:57.345 00:07:57.345 FDP configurations log page 00:07:57.345 =========================== 00:07:57.345 Number of FDP configurations: 1 00:07:57.345 Version: 0 00:07:57.345 Size: 112 00:07:57.345 FDP Configuration Descriptor: 0 00:07:57.345 Descriptor Size: 96 00:07:57.345 Reclaim Group Identifier format: 2 00:07:57.345 FDP Volatile Write Cache: Not Present 00:07:57.345 FDP Configuration: Valid 00:07:57.345 Vendor Specific Size: 0 00:07:57.345 Number of Reclaim Groups: 2 00:07:57.345 Number of Reclaim Unit Handles: 8 00:07:57.345 Max Placement Identifiers: 128 00:07:57.345 Number of Namespaces Supported: 256 00:07:57.345 Reclaim unit Nominal Size: 6000000 bytes 00:07:57.345 Estimated Reclaim Unit Time Limit: Not Reported 00:07:57.345 RUH Desc #000: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #001: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #002: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #003: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #004: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #005: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #006: RUH Type: Initially Isolated 00:07:57.345 RUH Desc #007: RUH Type: Initially Isolated 00:07:57.345 00:07:57.345 FDP reclaim unit handle usage log page 00:07:57.345 ====================================== 00:07:57.345 Number of Reclaim Unit Handles: 8 00:07:57.345 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:57.345 RUH Usage Desc #001: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #002: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #003: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #004: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #005: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #006: RUH Attributes: Unused 00:07:57.345 RUH Usage Desc #007: RUH Attributes: Unused 00:07:57.345 00:07:57.345 FDP statistics log page 00:07:57.345 ======================= 00:07:57.345 Host bytes with metadata written: 582459392 00:07:57.345 Media bytes with metadata written: 582537216 00:07:57.345 Media bytes erased: 0 00:07:57.345 00:07:57.345 FDP events log page 00:07:57.345 =================== 00:07:57.345 Number of FDP events: 0 00:07:57.345 00:07:57.345 NVM Specific Namespace Data 00:07:57.345 =========================== 00:07:57.345 Logical Block Storage Tag Mask: 0 00:07:57.345 Protection Information Capabilities: 00:07:57.345 16b Guard Protection Information Storage Tag Support: No 00:07:57.345 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.345 Storage Tag Check Read Support: No 00:07:57.345 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.345 00:07:57.345 real 0m1.211s 00:07:57.345 user 0m0.443s 00:07:57.345 sys 0m0.545s 00:07:57.345 19:06:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.345 19:06:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:57.345 ************************************ 00:07:57.345 END TEST nvme_identify 00:07:57.345 ************************************ 00:07:57.346 19:06:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:57.346 19:06:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.346 19:06:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.346 19:06:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.346 ************************************ 00:07:57.346 START TEST nvme_perf 00:07:57.346 ************************************ 00:07:57.346 19:06:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:57.346 19:06:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:58.770 Initializing NVMe Controllers 00:07:58.770 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.770 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.770 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.770 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.770 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:58.770 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:58.770 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:58.770 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:58.770 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:58.770 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:58.770 Initialization complete. Launching workers. 
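The -o 12288 argument to spdk_nvme_perf above sets the I/O size to 12288 bytes (12 KiB), so the MiB/s column in the summary that follows is just IOPS scaled by the I/O size: MiB/s = IOPS * 12288 / 2^20. A minimal shell sanity check, assuming only the per-device IOPS figure printed in the table (17729.42):

awk 'BEGIN { printf "%.2f MiB/s\n", 17729.42 * 12288 / (1024 * 1024) }'   # prints 207.77, matching each per-device row

The same arithmetic across the six identical namespace rows reproduces the Total row (~106376.5 IOPS, 1246.60 MiB/s, modulo rounding).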
00:07:58.770 ======================================================== 00:07:58.770 Latency(us) 00:07:58.770 Device Information : IOPS MiB/s Average min max 00:07:58.770 PCIE (0000:00:10.0) NSID 1 from core 0: 17729.42 207.77 7228.22 6059.39 32265.60 00:07:58.770 PCIE (0000:00:11.0) NSID 1 from core 0: 17729.42 207.77 7217.36 6169.24 30711.95 00:07:58.770 PCIE (0000:00:13.0) NSID 1 from core 0: 17729.42 207.77 7204.95 6157.83 28971.95 00:07:58.770 PCIE (0000:00:12.0) NSID 1 from core 0: 17729.42 207.77 7191.95 6148.28 26928.85 00:07:58.770 PCIE (0000:00:12.0) NSID 2 from core 0: 17729.42 207.77 7179.41 6177.66 25063.60 00:07:58.770 PCIE (0000:00:12.0) NSID 3 from core 0: 17729.42 207.77 7166.84 6178.83 23127.27 00:07:58.770 ======================================================== 00:07:58.770 Total : 106376.49 1246.60 7198.12 6059.39 32265.60 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:58.770 ================================================================================= 00:07:58.770 1.00000% : 6225.920us 00:07:58.770 10.00000% : 6427.569us 00:07:58.770 25.00000% : 6654.425us 00:07:58.770 50.00000% : 6956.898us 00:07:58.770 75.00000% : 7259.372us 00:07:58.770 90.00000% : 7662.671us 00:07:58.770 95.00000% : 8922.978us 00:07:58.770 98.00000% : 10536.172us 00:07:58.770 99.00000% : 13409.674us 00:07:58.770 99.50000% : 24802.855us 00:07:58.770 99.90000% : 31658.929us 00:07:58.770 99.99000% : 32263.877us 00:07:58.770 99.99900% : 32465.526us 00:07:58.770 99.99990% : 32465.526us 00:07:58.770 99.99999% : 32465.526us 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.770 ================================================================================= 00:07:58.770 1.00000% : 6301.538us 00:07:58.770 10.00000% : 6503.188us 00:07:58.770 25.00000% : 6704.837us 00:07:58.770 50.00000% : 6956.898us 00:07:58.770 75.00000% : 7208.960us 00:07:58.770 90.00000% : 7612.258us 00:07:58.770 95.00000% : 8872.566us 00:07:58.770 98.00000% : 10586.585us 00:07:58.770 99.00000% : 13409.674us 00:07:58.770 99.50000% : 23391.311us 00:07:58.770 99.90000% : 30247.385us 00:07:58.770 99.99000% : 30852.332us 00:07:58.770 99.99900% : 30852.332us 00:07:58.770 99.99990% : 30852.332us 00:07:58.770 99.99999% : 30852.332us 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.770 ================================================================================= 00:07:58.770 1.00000% : 6301.538us 00:07:58.770 10.00000% : 6503.188us 00:07:58.770 25.00000% : 6654.425us 00:07:58.770 50.00000% : 6956.898us 00:07:58.770 75.00000% : 7208.960us 00:07:58.770 90.00000% : 7612.258us 00:07:58.770 95.00000% : 8922.978us 00:07:58.770 98.00000% : 10435.348us 00:07:58.770 99.00000% : 13006.375us 00:07:58.770 99.50000% : 21878.942us 00:07:58.770 99.90000% : 28634.191us 00:07:58.770 99.99000% : 29037.489us 00:07:58.770 99.99900% : 29037.489us 00:07:58.770 99.99990% : 29037.489us 00:07:58.770 99.99999% : 29037.489us 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.770 ================================================================================= 00:07:58.770 1.00000% : 6301.538us 00:07:58.770 10.00000% : 6503.188us 00:07:58.770 25.00000% : 6704.837us 00:07:58.770 50.00000% : 6956.898us 00:07:58.770 75.00000% : 7208.960us 00:07:58.770 90.00000% : 7612.258us 00:07:58.770 95.00000% : 8771.742us 00:07:58.770 98.00000% : 10284.111us 00:07:58.770 99.00000% : 
12754.314us 00:07:58.770 99.50000% : 20265.748us 00:07:58.770 99.90000% : 26617.698us 00:07:58.770 99.99000% : 27020.997us 00:07:58.770 99.99900% : 27020.997us 00:07:58.770 99.99990% : 27020.997us 00:07:58.770 99.99999% : 27020.997us 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.770 ================================================================================= 00:07:58.770 1.00000% : 6301.538us 00:07:58.770 10.00000% : 6503.188us 00:07:58.770 25.00000% : 6704.837us 00:07:58.770 50.00000% : 6956.898us 00:07:58.770 75.00000% : 7259.372us 00:07:58.770 90.00000% : 7612.258us 00:07:58.770 95.00000% : 8771.742us 00:07:58.770 98.00000% : 10233.698us 00:07:58.770 99.00000% : 12703.902us 00:07:58.770 99.50000% : 18652.554us 00:07:58.770 99.90000% : 24702.031us 00:07:58.770 99.99000% : 25105.329us 00:07:58.770 99.99900% : 25105.329us 00:07:58.770 99.99990% : 25105.329us 00:07:58.770 99.99999% : 25105.329us 00:07:58.770 00:07:58.770 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.771 ================================================================================= 00:07:58.771 1.00000% : 6301.538us 00:07:58.771 10.00000% : 6503.188us 00:07:58.771 25.00000% : 6704.837us 00:07:58.771 50.00000% : 6956.898us 00:07:58.771 75.00000% : 7208.960us 00:07:58.771 90.00000% : 7612.258us 00:07:58.771 95.00000% : 8771.742us 00:07:58.771 98.00000% : 10334.523us 00:07:58.771 99.00000% : 13308.849us 00:07:58.771 99.50000% : 16938.535us 00:07:58.771 99.90000% : 22685.538us 00:07:58.771 99.99000% : 23189.662us 00:07:58.771 99.99900% : 23189.662us 00:07:58.771 99.99990% : 23189.662us 00:07:58.771 99.99999% : 23189.662us 00:07:58.771 00:07:58.771 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:58.771 ============================================================================== 00:07:58.771 Range in us Cumulative IO count 00:07:58.771 6049.477 - 6074.683: 0.0112% ( 2) 00:07:58.771 6074.683 - 6099.889: 0.0169% ( 1) 00:07:58.771 6099.889 - 6125.095: 0.0955% ( 14) 00:07:58.771 6125.095 - 6150.302: 0.2192% ( 22) 00:07:58.771 6150.302 - 6175.508: 0.4834% ( 47) 00:07:58.771 6175.508 - 6200.714: 0.8375% ( 63) 00:07:58.771 6200.714 - 6225.920: 1.5007% ( 118) 00:07:58.771 6225.920 - 6251.126: 2.2313% ( 130) 00:07:58.771 6251.126 - 6276.332: 3.2374% ( 179) 00:07:58.771 6276.332 - 6301.538: 4.3278% ( 194) 00:07:58.771 6301.538 - 6326.745: 5.4688% ( 203) 00:07:58.771 6326.745 - 6351.951: 6.8907% ( 253) 00:07:58.771 6351.951 - 6377.157: 8.2340% ( 239) 00:07:58.771 6377.157 - 6402.363: 9.6616% ( 254) 00:07:58.771 6402.363 - 6427.569: 11.1960% ( 273) 00:07:58.771 6427.569 - 6452.775: 12.7417% ( 275) 00:07:58.771 6452.775 - 6503.188: 15.9116% ( 564) 00:07:58.771 6503.188 - 6553.600: 19.4526% ( 630) 00:07:58.771 6553.600 - 6604.012: 23.1509% ( 658) 00:07:58.771 6604.012 - 6654.425: 26.9391% ( 674) 00:07:58.771 6654.425 - 6704.837: 30.8622% ( 698) 00:07:58.771 6704.837 - 6755.249: 34.7460% ( 691) 00:07:58.771 6755.249 - 6805.662: 38.9107% ( 741) 00:07:58.771 6805.662 - 6856.074: 42.7158% ( 677) 00:07:58.771 6856.074 - 6906.486: 46.9649% ( 756) 00:07:58.771 6906.486 - 6956.898: 50.9555% ( 710) 00:07:58.771 6956.898 - 7007.311: 55.2327% ( 761) 00:07:58.771 7007.311 - 7057.723: 59.3019% ( 724) 00:07:58.771 7057.723 - 7108.135: 63.5004% ( 747) 00:07:58.771 7108.135 - 7158.548: 67.7158% ( 750) 00:07:58.771 7158.548 - 7208.960: 71.7176% ( 712) 00:07:58.771 7208.960 - 7259.372: 75.5339% ( 679) 00:07:58.771 7259.372 - 7309.785: 79.0580% ( 627) 
00:07:58.771 7309.785 - 7360.197: 81.9919% ( 522) 00:07:58.771 7360.197 - 7410.609: 84.2570% ( 403) 00:07:58.771 7410.609 - 7461.022: 86.0555% ( 320) 00:07:58.771 7461.022 - 7511.434: 87.4944% ( 256) 00:07:58.771 7511.434 - 7561.846: 88.7983% ( 232) 00:07:58.771 7561.846 - 7612.258: 89.8156% ( 181) 00:07:58.771 7612.258 - 7662.671: 90.6025% ( 140) 00:07:58.771 7662.671 - 7713.083: 91.1927% ( 105) 00:07:58.771 7713.083 - 7763.495: 91.6367% ( 79) 00:07:58.771 7763.495 - 7813.908: 92.0582% ( 75) 00:07:58.771 7813.908 - 7864.320: 92.4011% ( 61) 00:07:58.771 7864.320 - 7914.732: 92.6652% ( 47) 00:07:58.771 7914.732 - 7965.145: 92.8676% ( 36) 00:07:58.771 7965.145 - 8015.557: 93.0081% ( 25) 00:07:58.771 8015.557 - 8065.969: 93.1317% ( 22) 00:07:58.771 8065.969 - 8116.382: 93.2385% ( 19) 00:07:58.771 8116.382 - 8166.794: 93.3734% ( 24) 00:07:58.771 8166.794 - 8217.206: 93.4971% ( 22) 00:07:58.771 8217.206 - 8267.618: 93.6095% ( 20) 00:07:58.771 8267.618 - 8318.031: 93.6938% ( 15) 00:07:58.771 8318.031 - 8368.443: 93.7893% ( 17) 00:07:58.771 8368.443 - 8418.855: 93.8793% ( 16) 00:07:58.771 8418.855 - 8469.268: 94.0085% ( 23) 00:07:58.771 8469.268 - 8519.680: 94.1266% ( 21) 00:07:58.771 8519.680 - 8570.092: 94.2446% ( 21) 00:07:58.771 8570.092 - 8620.505: 94.3626% ( 21) 00:07:58.771 8620.505 - 8670.917: 94.4975% ( 24) 00:07:58.771 8670.917 - 8721.329: 94.6324% ( 24) 00:07:58.771 8721.329 - 8771.742: 94.7448% ( 20) 00:07:58.771 8771.742 - 8822.154: 94.8629% ( 21) 00:07:58.771 8822.154 - 8872.566: 94.9696% ( 19) 00:07:58.771 8872.566 - 8922.978: 95.0708% ( 18) 00:07:58.771 8922.978 - 8973.391: 95.2001% ( 23) 00:07:58.771 8973.391 - 9023.803: 95.3350% ( 24) 00:07:58.771 9023.803 - 9074.215: 95.4867% ( 27) 00:07:58.771 9074.215 - 9124.628: 95.6048% ( 21) 00:07:58.771 9124.628 - 9175.040: 95.7228% ( 21) 00:07:58.771 9175.040 - 9225.452: 95.8464% ( 22) 00:07:58.771 9225.452 - 9275.865: 95.9645% ( 21) 00:07:58.771 9275.865 - 9326.277: 96.0881% ( 22) 00:07:58.771 9326.277 - 9376.689: 96.1781% ( 16) 00:07:58.771 9376.689 - 9427.102: 96.2905% ( 20) 00:07:58.771 9427.102 - 9477.514: 96.4254% ( 24) 00:07:58.771 9477.514 - 9527.926: 96.5603% ( 24) 00:07:58.771 9527.926 - 9578.338: 96.6727% ( 20) 00:07:58.771 9578.338 - 9628.751: 96.7626% ( 16) 00:07:58.771 9628.751 - 9679.163: 96.8694% ( 19) 00:07:58.771 9679.163 - 9729.575: 96.9593% ( 16) 00:07:58.771 9729.575 - 9779.988: 97.0661% ( 19) 00:07:58.771 9779.988 - 9830.400: 97.1560% ( 16) 00:07:58.771 9830.400 - 9880.812: 97.2403% ( 15) 00:07:58.771 9880.812 - 9931.225: 97.3471% ( 19) 00:07:58.771 9931.225 - 9981.637: 97.3752% ( 5) 00:07:58.771 9981.637 - 10032.049: 97.4764% ( 18) 00:07:58.771 10032.049 - 10082.462: 97.5270% ( 9) 00:07:58.771 10082.462 - 10132.874: 97.5832% ( 10) 00:07:58.771 10132.874 - 10183.286: 97.6506% ( 12) 00:07:58.771 10183.286 - 10233.698: 97.7125% ( 11) 00:07:58.771 10233.698 - 10284.111: 97.7687% ( 10) 00:07:58.771 10284.111 - 10334.523: 97.8080% ( 7) 00:07:58.771 10334.523 - 10384.935: 97.8698% ( 11) 00:07:58.771 10384.935 - 10435.348: 97.9260% ( 10) 00:07:58.771 10435.348 - 10485.760: 97.9710% ( 8) 00:07:58.771 10485.760 - 10536.172: 98.0272% ( 10) 00:07:58.771 10536.172 - 10586.585: 98.0778% ( 9) 00:07:58.771 10586.585 - 10636.997: 98.1452% ( 12) 00:07:58.771 10636.997 - 10687.409: 98.1958% ( 9) 00:07:58.771 10687.409 - 10737.822: 98.2576% ( 11) 00:07:58.771 10737.822 - 10788.234: 98.3251% ( 12) 00:07:58.771 10788.234 - 10838.646: 98.3701% ( 8) 00:07:58.771 10838.646 - 10889.058: 98.4431% ( 13) 00:07:58.771 10889.058 - 10939.471: 
98.4881% ( 8) 00:07:58.771 10939.471 - 10989.883: 98.5499% ( 11) 00:07:58.771 10989.883 - 11040.295: 98.5893% ( 7) 00:07:58.771 11040.295 - 11090.708: 98.6230% ( 6) 00:07:58.771 11090.708 - 11141.120: 98.6679% ( 8) 00:07:58.771 11141.120 - 11191.532: 98.7129% ( 8) 00:07:58.771 11191.532 - 11241.945: 98.7635% ( 9) 00:07:58.771 11241.945 - 11292.357: 98.8085% ( 8) 00:07:58.771 11292.357 - 11342.769: 98.8309% ( 4) 00:07:58.771 11342.769 - 11393.182: 98.8478% ( 3) 00:07:58.771 11393.182 - 11443.594: 98.8647% ( 3) 00:07:58.771 11443.594 - 11494.006: 98.8871% ( 4) 00:07:58.771 11494.006 - 11544.418: 98.8984% ( 2) 00:07:58.771 11544.418 - 11594.831: 98.9040% ( 1) 00:07:58.771 11594.831 - 11645.243: 98.9209% ( 3) 00:07:58.771 13107.200 - 13208.025: 98.9490% ( 5) 00:07:58.771 13208.025 - 13308.849: 98.9883% ( 7) 00:07:58.771 13308.849 - 13409.674: 99.0108% ( 4) 00:07:58.771 13409.674 - 13510.498: 99.0445% ( 6) 00:07:58.771 13510.498 - 13611.323: 99.0782% ( 6) 00:07:58.771 13611.323 - 13712.148: 99.1120% ( 6) 00:07:58.771 13712.148 - 13812.972: 99.1457% ( 6) 00:07:58.771 13812.972 - 13913.797: 99.1794% ( 6) 00:07:58.771 13913.797 - 14014.622: 99.2131% ( 6) 00:07:58.771 14014.622 - 14115.446: 99.2469% ( 6) 00:07:58.771 14115.446 - 14216.271: 99.2806% ( 6) 00:07:58.771 23290.486 - 23391.311: 99.2862% ( 1) 00:07:58.771 23391.311 - 23492.135: 99.2974% ( 2) 00:07:58.771 23492.135 - 23592.960: 99.3087% ( 2) 00:07:58.771 23592.960 - 23693.785: 99.3312% ( 4) 00:07:58.771 23693.785 - 23794.609: 99.3480% ( 3) 00:07:58.771 23794.609 - 23895.434: 99.3649% ( 3) 00:07:58.771 23895.434 - 23996.258: 99.3761% ( 2) 00:07:58.771 23996.258 - 24097.083: 99.3874% ( 2) 00:07:58.771 24097.083 - 24197.908: 99.4042% ( 3) 00:07:58.771 24197.908 - 24298.732: 99.4267% ( 4) 00:07:58.771 24298.732 - 24399.557: 99.4436% ( 3) 00:07:58.771 24399.557 - 24500.382: 99.4604% ( 3) 00:07:58.771 24500.382 - 24601.206: 99.4717% ( 2) 00:07:58.771 24601.206 - 24702.031: 99.4885% ( 3) 00:07:58.771 24702.031 - 24802.855: 99.5054% ( 3) 00:07:58.771 24802.855 - 24903.680: 99.5223% ( 3) 00:07:58.771 24903.680 - 25004.505: 99.5391% ( 3) 00:07:58.771 25004.505 - 25105.329: 99.5616% ( 4) 00:07:58.771 25105.329 - 25206.154: 99.5728% ( 2) 00:07:58.771 25206.154 - 25306.978: 99.5897% ( 3) 00:07:58.771 25306.978 - 25407.803: 99.6066% ( 3) 00:07:58.771 25407.803 - 25508.628: 99.6178% ( 2) 00:07:58.771 25508.628 - 25609.452: 99.6403% ( 4) 00:07:58.771 30045.735 - 30247.385: 99.6740% ( 6) 00:07:58.771 30247.385 - 30449.034: 99.7077% ( 6) 00:07:58.771 30449.034 - 30650.683: 99.7358% ( 5) 00:07:58.771 30650.683 - 30852.332: 99.7696% ( 6) 00:07:58.771 30852.332 - 31053.982: 99.8033% ( 6) 00:07:58.771 31053.982 - 31255.631: 99.8426% ( 7) 00:07:58.771 31255.631 - 31457.280: 99.8707% ( 5) 00:07:58.771 31457.280 - 31658.929: 99.9045% ( 6) 00:07:58.771 31658.929 - 31860.578: 99.9382% ( 6) 00:07:58.771 31860.578 - 32062.228: 99.9719% ( 6) 00:07:58.771 32062.228 - 32263.877: 99.9944% ( 4) 00:07:58.771 32263.877 - 32465.526: 100.0000% ( 1) 00:07:58.771 00:07:58.771 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.771 ============================================================================== 00:07:58.771 Range in us Cumulative IO count 00:07:58.772 6150.302 - 6175.508: 0.0112% ( 2) 00:07:58.772 6175.508 - 6200.714: 0.0843% ( 13) 00:07:58.772 6200.714 - 6225.920: 0.1630% ( 14) 00:07:58.772 6225.920 - 6251.126: 0.4215% ( 46) 00:07:58.772 6251.126 - 6276.332: 0.8206% ( 71) 00:07:58.772 6276.332 - 6301.538: 1.5119% ( 123) 00:07:58.772 6301.538 - 
6326.745: 2.2988% ( 140) 00:07:58.772 6326.745 - 6351.951: 3.4510% ( 205) 00:07:58.772 6351.951 - 6377.157: 4.8224% ( 244) 00:07:58.772 6377.157 - 6402.363: 6.3455% ( 271) 00:07:58.772 6402.363 - 6427.569: 7.9699% ( 289) 00:07:58.772 6427.569 - 6452.775: 9.4312% ( 260) 00:07:58.772 6452.775 - 6503.188: 12.6742% ( 577) 00:07:58.772 6503.188 - 6553.600: 16.2826% ( 642) 00:07:58.772 6553.600 - 6604.012: 20.0427% ( 669) 00:07:58.772 6604.012 - 6654.425: 24.3368% ( 764) 00:07:58.772 6654.425 - 6704.837: 28.6646% ( 770) 00:07:58.772 6704.837 - 6755.249: 33.0767% ( 785) 00:07:58.772 6755.249 - 6805.662: 37.7304% ( 828) 00:07:58.772 6805.662 - 6856.074: 42.4629% ( 842) 00:07:58.772 6856.074 - 6906.486: 47.0211% ( 811) 00:07:58.772 6906.486 - 6956.898: 51.8716% ( 863) 00:07:58.772 6956.898 - 7007.311: 56.6041% ( 842) 00:07:58.772 7007.311 - 7057.723: 61.6457% ( 897) 00:07:58.772 7057.723 - 7108.135: 66.3781% ( 842) 00:07:58.772 7108.135 - 7158.548: 71.0151% ( 825) 00:07:58.772 7158.548 - 7208.960: 75.2192% ( 748) 00:07:58.772 7208.960 - 7259.372: 78.9062% ( 656) 00:07:58.772 7259.372 - 7309.785: 81.8458% ( 523) 00:07:58.772 7309.785 - 7360.197: 84.1614% ( 412) 00:07:58.772 7360.197 - 7410.609: 86.0668% ( 339) 00:07:58.772 7410.609 - 7461.022: 87.5899% ( 271) 00:07:58.772 7461.022 - 7511.434: 88.8377% ( 222) 00:07:58.772 7511.434 - 7561.846: 89.8719% ( 184) 00:07:58.772 7561.846 - 7612.258: 90.5463% ( 120) 00:07:58.772 7612.258 - 7662.671: 91.1196% ( 102) 00:07:58.772 7662.671 - 7713.083: 91.6086% ( 87) 00:07:58.772 7713.083 - 7763.495: 92.0133% ( 72) 00:07:58.772 7763.495 - 7813.908: 92.3112% ( 53) 00:07:58.772 7813.908 - 7864.320: 92.5922% ( 50) 00:07:58.772 7864.320 - 7914.732: 92.7889% ( 35) 00:07:58.772 7914.732 - 7965.145: 92.9238% ( 24) 00:07:58.772 7965.145 - 8015.557: 93.0474% ( 22) 00:07:58.772 8015.557 - 8065.969: 93.1430% ( 17) 00:07:58.772 8065.969 - 8116.382: 93.2329% ( 16) 00:07:58.772 8116.382 - 8166.794: 93.3004% ( 12) 00:07:58.772 8166.794 - 8217.206: 93.3790% ( 14) 00:07:58.772 8217.206 - 8267.618: 93.4577% ( 14) 00:07:58.772 8267.618 - 8318.031: 93.5308% ( 13) 00:07:58.772 8318.031 - 8368.443: 93.6263% ( 17) 00:07:58.772 8368.443 - 8418.855: 93.7669% ( 25) 00:07:58.772 8418.855 - 8469.268: 93.8737% ( 19) 00:07:58.772 8469.268 - 8519.680: 94.0254% ( 27) 00:07:58.772 8519.680 - 8570.092: 94.1491% ( 22) 00:07:58.772 8570.092 - 8620.505: 94.2839% ( 24) 00:07:58.772 8620.505 - 8670.917: 94.4020% ( 21) 00:07:58.772 8670.917 - 8721.329: 94.5369% ( 24) 00:07:58.772 8721.329 - 8771.742: 94.6661% ( 23) 00:07:58.772 8771.742 - 8822.154: 94.8629% ( 35) 00:07:58.772 8822.154 - 8872.566: 95.0371% ( 31) 00:07:58.772 8872.566 - 8922.978: 95.2113% ( 31) 00:07:58.772 8922.978 - 8973.391: 95.3631% ( 27) 00:07:58.772 8973.391 - 9023.803: 95.5205% ( 28) 00:07:58.772 9023.803 - 9074.215: 95.6554% ( 24) 00:07:58.772 9074.215 - 9124.628: 95.8127% ( 28) 00:07:58.772 9124.628 - 9175.040: 95.9870% ( 31) 00:07:58.772 9175.040 - 9225.452: 96.1668% ( 32) 00:07:58.772 9225.452 - 9275.865: 96.3579% ( 34) 00:07:58.772 9275.865 - 9326.277: 96.5209% ( 29) 00:07:58.772 9326.277 - 9376.689: 96.6670% ( 26) 00:07:58.772 9376.689 - 9427.102: 96.8076% ( 25) 00:07:58.772 9427.102 - 9477.514: 96.9593% ( 27) 00:07:58.772 9477.514 - 9527.926: 97.0549% ( 17) 00:07:58.772 9527.926 - 9578.338: 97.1673% ( 20) 00:07:58.772 9578.338 - 9628.751: 97.2460% ( 14) 00:07:58.772 9628.751 - 9679.163: 97.3246% ( 14) 00:07:58.772 9679.163 - 9729.575: 97.3865% ( 11) 00:07:58.772 9729.575 - 9779.988: 97.4258% ( 7) 00:07:58.772 9779.988 - 
9830.400: 97.4708% ( 8) 00:07:58.772 9830.400 - 9880.812: 97.5157% ( 8) 00:07:58.772 9880.812 - 9931.225: 97.5607% ( 8) 00:07:58.772 9931.225 - 9981.637: 97.6057% ( 8) 00:07:58.772 9981.637 - 10032.049: 97.6562% ( 9) 00:07:58.772 10032.049 - 10082.462: 97.6787% ( 4) 00:07:58.772 10082.462 - 10132.874: 97.7068% ( 5) 00:07:58.772 10132.874 - 10183.286: 97.7237% ( 3) 00:07:58.772 10183.286 - 10233.698: 97.7462% ( 4) 00:07:58.772 10233.698 - 10284.111: 97.7799% ( 6) 00:07:58.772 10284.111 - 10334.523: 97.8136% ( 6) 00:07:58.772 10334.523 - 10384.935: 97.8642% ( 9) 00:07:58.772 10384.935 - 10435.348: 97.9260% ( 11) 00:07:58.772 10435.348 - 10485.760: 97.9485% ( 4) 00:07:58.772 10485.760 - 10536.172: 97.9879% ( 7) 00:07:58.772 10536.172 - 10586.585: 98.0216% ( 6) 00:07:58.772 10586.585 - 10636.997: 98.0497% ( 5) 00:07:58.772 10636.997 - 10687.409: 98.0722% ( 4) 00:07:58.772 10687.409 - 10737.822: 98.1396% ( 12) 00:07:58.772 10737.822 - 10788.234: 98.1958% ( 10) 00:07:58.772 10788.234 - 10838.646: 98.2633% ( 12) 00:07:58.772 10838.646 - 10889.058: 98.2970% ( 6) 00:07:58.772 10889.058 - 10939.471: 98.3363% ( 7) 00:07:58.772 10939.471 - 10989.883: 98.3757% ( 7) 00:07:58.772 10989.883 - 11040.295: 98.4206% ( 8) 00:07:58.772 11040.295 - 11090.708: 98.4656% ( 8) 00:07:58.772 11090.708 - 11141.120: 98.5106% ( 8) 00:07:58.772 11141.120 - 11191.532: 98.5499% ( 7) 00:07:58.772 11191.532 - 11241.945: 98.5836% ( 6) 00:07:58.772 11241.945 - 11292.357: 98.6061% ( 4) 00:07:58.772 11292.357 - 11342.769: 98.6342% ( 5) 00:07:58.772 11342.769 - 11393.182: 98.6511% ( 3) 00:07:58.772 11393.182 - 11443.594: 98.6736% ( 4) 00:07:58.772 11443.594 - 11494.006: 98.7017% ( 5) 00:07:58.772 11494.006 - 11544.418: 98.7241% ( 4) 00:07:58.772 11544.418 - 11594.831: 98.7522% ( 5) 00:07:58.772 11594.831 - 11645.243: 98.7804% ( 5) 00:07:58.772 11645.243 - 11695.655: 98.8028% ( 4) 00:07:58.772 11695.655 - 11746.068: 98.8309% ( 5) 00:07:58.772 11746.068 - 11796.480: 98.8534% ( 4) 00:07:58.772 11796.480 - 11846.892: 98.8759% ( 4) 00:07:58.772 11846.892 - 11897.305: 98.8871% ( 2) 00:07:58.772 11897.305 - 11947.717: 98.8984% ( 2) 00:07:58.772 11947.717 - 11998.129: 98.9096% ( 2) 00:07:58.772 11998.129 - 12048.542: 98.9209% ( 2) 00:07:58.772 13308.849 - 13409.674: 99.0164% ( 17) 00:07:58.772 13409.674 - 13510.498: 99.0333% ( 3) 00:07:58.772 13510.498 - 13611.323: 99.0726% ( 7) 00:07:58.772 13611.323 - 13712.148: 99.1120% ( 7) 00:07:58.772 13712.148 - 13812.972: 99.1457% ( 6) 00:07:58.772 13812.972 - 13913.797: 99.1850% ( 7) 00:07:58.772 13913.797 - 14014.622: 99.2300% ( 8) 00:07:58.772 14014.622 - 14115.446: 99.2693% ( 7) 00:07:58.772 14115.446 - 14216.271: 99.2806% ( 2) 00:07:58.772 22080.591 - 22181.415: 99.2974% ( 3) 00:07:58.772 22181.415 - 22282.240: 99.3143% ( 3) 00:07:58.772 22282.240 - 22383.065: 99.3368% ( 4) 00:07:58.772 22383.065 - 22483.889: 99.3536% ( 3) 00:07:58.772 22483.889 - 22584.714: 99.3705% ( 3) 00:07:58.772 22584.714 - 22685.538: 99.3874% ( 3) 00:07:58.772 22685.538 - 22786.363: 99.4042% ( 3) 00:07:58.772 22786.363 - 22887.188: 99.4211% ( 3) 00:07:58.772 22887.188 - 22988.012: 99.4379% ( 3) 00:07:58.772 22988.012 - 23088.837: 99.4604% ( 4) 00:07:58.772 23088.837 - 23189.662: 99.4773% ( 3) 00:07:58.772 23189.662 - 23290.486: 99.4942% ( 3) 00:07:58.772 23290.486 - 23391.311: 99.5110% ( 3) 00:07:58.772 23391.311 - 23492.135: 99.5279% ( 3) 00:07:58.772 23492.135 - 23592.960: 99.5504% ( 4) 00:07:58.772 23592.960 - 23693.785: 99.5672% ( 3) 00:07:58.772 23693.785 - 23794.609: 99.5841% ( 3) 00:07:58.772 23794.609 - 
23895.434: 99.6009% ( 3) 00:07:58.772 23895.434 - 23996.258: 99.6178% ( 3) 00:07:58.772 23996.258 - 24097.083: 99.6403% ( 4) 00:07:58.772 28634.191 - 28835.840: 99.6628% ( 4) 00:07:58.772 28835.840 - 29037.489: 99.7021% ( 7) 00:07:58.772 29037.489 - 29239.138: 99.7358% ( 6) 00:07:58.772 29239.138 - 29440.788: 99.7696% ( 6) 00:07:58.772 29440.788 - 29642.437: 99.8033% ( 6) 00:07:58.772 29642.437 - 29844.086: 99.8426% ( 7) 00:07:58.772 29844.086 - 30045.735: 99.8820% ( 7) 00:07:58.772 30045.735 - 30247.385: 99.9157% ( 6) 00:07:58.772 30247.385 - 30449.034: 99.9550% ( 7) 00:07:58.772 30449.034 - 30650.683: 99.9888% ( 6) 00:07:58.772 30650.683 - 30852.332: 100.0000% ( 2) 00:07:58.772 00:07:58.772 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.772 ============================================================================== 00:07:58.772 Range in us Cumulative IO count 00:07:58.772 6150.302 - 6175.508: 0.0281% ( 5) 00:07:58.772 6175.508 - 6200.714: 0.0618% ( 6) 00:07:58.772 6200.714 - 6225.920: 0.2361% ( 31) 00:07:58.772 6225.920 - 6251.126: 0.5171% ( 50) 00:07:58.772 6251.126 - 6276.332: 0.9330% ( 74) 00:07:58.772 6276.332 - 6301.538: 1.5175% ( 104) 00:07:58.772 6301.538 - 6326.745: 2.3269% ( 144) 00:07:58.772 6326.745 - 6351.951: 3.4510% ( 200) 00:07:58.772 6351.951 - 6377.157: 4.6819% ( 219) 00:07:58.772 6377.157 - 6402.363: 6.1601% ( 263) 00:07:58.772 6402.363 - 6427.569: 7.6270% ( 261) 00:07:58.772 6427.569 - 6452.775: 9.2345% ( 286) 00:07:58.772 6452.775 - 6503.188: 12.9496% ( 661) 00:07:58.772 6503.188 - 6553.600: 16.7210% ( 671) 00:07:58.772 6553.600 - 6604.012: 20.7284% ( 713) 00:07:58.772 6604.012 - 6654.425: 25.1405% ( 785) 00:07:58.772 6654.425 - 6704.837: 29.4795% ( 772) 00:07:58.772 6704.837 - 6755.249: 33.9816% ( 801) 00:07:58.772 6755.249 - 6805.662: 38.5061% ( 805) 00:07:58.772 6805.662 - 6856.074: 43.0699% ( 812) 00:07:58.773 6856.074 - 6906.486: 47.5438% ( 796) 00:07:58.773 6906.486 - 6956.898: 52.2201% ( 832) 00:07:58.773 6956.898 - 7007.311: 56.8514% ( 824) 00:07:58.773 7007.311 - 7057.723: 61.5782% ( 841) 00:07:58.773 7057.723 - 7108.135: 66.2770% ( 836) 00:07:58.773 7108.135 - 7158.548: 70.8746% ( 818) 00:07:58.773 7158.548 - 7208.960: 75.0112% ( 736) 00:07:58.773 7208.960 - 7259.372: 78.5803% ( 635) 00:07:58.773 7259.372 - 7309.785: 81.3399% ( 491) 00:07:58.773 7309.785 - 7360.197: 83.7286% ( 425) 00:07:58.773 7360.197 - 7410.609: 85.6115% ( 335) 00:07:58.773 7410.609 - 7461.022: 87.2134% ( 285) 00:07:58.773 7461.022 - 7511.434: 88.4948% ( 228) 00:07:58.773 7511.434 - 7561.846: 89.4840% ( 176) 00:07:58.773 7561.846 - 7612.258: 90.2316% ( 133) 00:07:58.773 7612.258 - 7662.671: 90.8498% ( 110) 00:07:58.773 7662.671 - 7713.083: 91.3838% ( 95) 00:07:58.773 7713.083 - 7763.495: 91.8446% ( 82) 00:07:58.773 7763.495 - 7813.908: 92.2549% ( 73) 00:07:58.773 7813.908 - 7864.320: 92.5922% ( 60) 00:07:58.773 7864.320 - 7914.732: 92.7945% ( 36) 00:07:58.773 7914.732 - 7965.145: 92.9575% ( 29) 00:07:58.773 7965.145 - 8015.557: 93.0868% ( 23) 00:07:58.773 8015.557 - 8065.969: 93.1992% ( 20) 00:07:58.773 8065.969 - 8116.382: 93.3004% ( 18) 00:07:58.773 8116.382 - 8166.794: 93.4240% ( 22) 00:07:58.773 8166.794 - 8217.206: 93.5252% ( 18) 00:07:58.773 8217.206 - 8267.618: 93.6376% ( 20) 00:07:58.773 8267.618 - 8318.031: 93.7219% ( 15) 00:07:58.773 8318.031 - 8368.443: 93.7893% ( 12) 00:07:58.773 8368.443 - 8418.855: 93.8624% ( 13) 00:07:58.773 8418.855 - 8469.268: 93.9580% ( 17) 00:07:58.773 8469.268 - 8519.680: 94.0704% ( 20) 00:07:58.773 8519.680 - 8570.092: 
94.1772% ( 19) 00:07:58.773 8570.092 - 8620.505: 94.2839% ( 19) 00:07:58.773 8620.505 - 8670.917: 94.3795% ( 17) 00:07:58.773 8670.917 - 8721.329: 94.4863% ( 19) 00:07:58.773 8721.329 - 8771.742: 94.5875% ( 18) 00:07:58.773 8771.742 - 8822.154: 94.7055% ( 21) 00:07:58.773 8822.154 - 8872.566: 94.8404% ( 24) 00:07:58.773 8872.566 - 8922.978: 95.0090% ( 30) 00:07:58.773 8922.978 - 8973.391: 95.1551% ( 26) 00:07:58.773 8973.391 - 9023.803: 95.3575% ( 36) 00:07:58.773 9023.803 - 9074.215: 95.5205% ( 29) 00:07:58.773 9074.215 - 9124.628: 95.6722% ( 27) 00:07:58.773 9124.628 - 9175.040: 95.8240% ( 27) 00:07:58.773 9175.040 - 9225.452: 95.9701% ( 26) 00:07:58.773 9225.452 - 9275.865: 96.1275% ( 28) 00:07:58.773 9275.865 - 9326.277: 96.2848% ( 28) 00:07:58.773 9326.277 - 9376.689: 96.4478% ( 29) 00:07:58.773 9376.689 - 9427.102: 96.5996% ( 27) 00:07:58.773 9427.102 - 9477.514: 96.7345% ( 24) 00:07:58.773 9477.514 - 9527.926: 96.8581% ( 22) 00:07:58.773 9527.926 - 9578.338: 96.9987% ( 25) 00:07:58.773 9578.338 - 9628.751: 97.1167% ( 21) 00:07:58.773 9628.751 - 9679.163: 97.2347% ( 21) 00:07:58.773 9679.163 - 9729.575: 97.3190% ( 15) 00:07:58.773 9729.575 - 9779.988: 97.4033% ( 15) 00:07:58.773 9779.988 - 9830.400: 97.4708% ( 12) 00:07:58.773 9830.400 - 9880.812: 97.5438% ( 13) 00:07:58.773 9880.812 - 9931.225: 97.6113% ( 12) 00:07:58.773 9931.225 - 9981.637: 97.6844% ( 13) 00:07:58.773 9981.637 - 10032.049: 97.7518% ( 12) 00:07:58.773 10032.049 - 10082.462: 97.8249% ( 13) 00:07:58.773 10082.462 - 10132.874: 97.8811% ( 10) 00:07:58.773 10132.874 - 10183.286: 97.9204% ( 7) 00:07:58.773 10183.286 - 10233.698: 97.9429% ( 4) 00:07:58.773 10233.698 - 10284.111: 97.9598% ( 3) 00:07:58.773 10284.111 - 10334.523: 97.9766% ( 3) 00:07:58.773 10334.523 - 10384.935: 97.9991% ( 4) 00:07:58.773 10384.935 - 10435.348: 98.0272% ( 5) 00:07:58.773 10435.348 - 10485.760: 98.0834% ( 10) 00:07:58.773 10485.760 - 10536.172: 98.1171% ( 6) 00:07:58.773 10536.172 - 10586.585: 98.1396% ( 4) 00:07:58.773 10586.585 - 10636.997: 98.1902% ( 9) 00:07:58.773 10636.997 - 10687.409: 98.2127% ( 4) 00:07:58.773 10687.409 - 10737.822: 98.2464% ( 6) 00:07:58.773 10737.822 - 10788.234: 98.2745% ( 5) 00:07:58.773 10788.234 - 10838.646: 98.3195% ( 8) 00:07:58.773 10838.646 - 10889.058: 98.3644% ( 8) 00:07:58.773 10889.058 - 10939.471: 98.3925% ( 5) 00:07:58.773 10939.471 - 10989.883: 98.4206% ( 5) 00:07:58.773 10989.883 - 11040.295: 98.4431% ( 4) 00:07:58.773 11040.295 - 11090.708: 98.4712% ( 5) 00:07:58.773 11090.708 - 11141.120: 98.4993% ( 5) 00:07:58.773 11141.120 - 11191.532: 98.5218% ( 4) 00:07:58.773 11191.532 - 11241.945: 98.5499% ( 5) 00:07:58.773 11241.945 - 11292.357: 98.5724% ( 4) 00:07:58.773 11292.357 - 11342.769: 98.6005% ( 5) 00:07:58.773 11342.769 - 11393.182: 98.6230% ( 4) 00:07:58.773 11393.182 - 11443.594: 98.6455% ( 4) 00:07:58.773 11443.594 - 11494.006: 98.6679% ( 4) 00:07:58.773 11494.006 - 11544.418: 98.7017% ( 6) 00:07:58.773 11544.418 - 11594.831: 98.7241% ( 4) 00:07:58.773 11594.831 - 11645.243: 98.7410% ( 3) 00:07:58.773 11645.243 - 11695.655: 98.7522% ( 2) 00:07:58.773 11695.655 - 11746.068: 98.7635% ( 2) 00:07:58.773 11746.068 - 11796.480: 98.7747% ( 2) 00:07:58.773 11796.480 - 11846.892: 98.7860% ( 2) 00:07:58.773 11846.892 - 11897.305: 98.7972% ( 2) 00:07:58.773 11897.305 - 11947.717: 98.8085% ( 2) 00:07:58.773 11947.717 - 11998.129: 98.8197% ( 2) 00:07:58.773 11998.129 - 12048.542: 98.8309% ( 2) 00:07:58.773 12048.542 - 12098.954: 98.8422% ( 2) 00:07:58.773 12098.954 - 12149.366: 98.8534% ( 2) 00:07:58.773 
12149.366 - 12199.778: 98.8647% ( 2) 00:07:58.773 12199.778 - 12250.191: 98.8759% ( 2) 00:07:58.773 12250.191 - 12300.603: 98.8871% ( 2) 00:07:58.773 12300.603 - 12351.015: 98.8984% ( 2) 00:07:58.773 12351.015 - 12401.428: 98.9096% ( 2) 00:07:58.773 12401.428 - 12451.840: 98.9209% ( 2) 00:07:58.773 12603.077 - 12653.489: 98.9321% ( 2) 00:07:58.773 12653.489 - 12703.902: 98.9433% ( 2) 00:07:58.773 12703.902 - 12754.314: 98.9546% ( 2) 00:07:58.773 12754.314 - 12804.726: 98.9658% ( 2) 00:07:58.773 12804.726 - 12855.138: 98.9714% ( 1) 00:07:58.773 12855.138 - 12905.551: 98.9827% ( 2) 00:07:58.773 12905.551 - 13006.375: 99.0052% ( 4) 00:07:58.773 13006.375 - 13107.200: 99.0277% ( 4) 00:07:58.773 13107.200 - 13208.025: 99.0501% ( 4) 00:07:58.773 13208.025 - 13308.849: 99.0670% ( 3) 00:07:58.773 13308.849 - 13409.674: 99.0895% ( 4) 00:07:58.773 13409.674 - 13510.498: 99.1120% ( 4) 00:07:58.773 13510.498 - 13611.323: 99.1344% ( 4) 00:07:58.773 13611.323 - 13712.148: 99.1569% ( 4) 00:07:58.773 13712.148 - 13812.972: 99.1794% ( 4) 00:07:58.773 13812.972 - 13913.797: 99.2019% ( 4) 00:07:58.773 13913.797 - 14014.622: 99.2188% ( 3) 00:07:58.773 14014.622 - 14115.446: 99.2412% ( 4) 00:07:58.773 14115.446 - 14216.271: 99.2637% ( 4) 00:07:58.773 14216.271 - 14317.095: 99.2806% ( 3) 00:07:58.773 20366.572 - 20467.397: 99.2862% ( 1) 00:07:58.773 20467.397 - 20568.222: 99.2974% ( 2) 00:07:58.773 20568.222 - 20669.046: 99.3199% ( 4) 00:07:58.773 20669.046 - 20769.871: 99.3368% ( 3) 00:07:58.773 20769.871 - 20870.695: 99.3536% ( 3) 00:07:58.773 20870.695 - 20971.520: 99.3705% ( 3) 00:07:58.773 20971.520 - 21072.345: 99.3817% ( 2) 00:07:58.773 21072.345 - 21173.169: 99.3930% ( 2) 00:07:58.773 21173.169 - 21273.994: 99.4042% ( 2) 00:07:58.773 21273.994 - 21374.818: 99.4211% ( 3) 00:07:58.773 21374.818 - 21475.643: 99.4379% ( 3) 00:07:58.773 21475.643 - 21576.468: 99.4604% ( 4) 00:07:58.773 21576.468 - 21677.292: 99.4773% ( 3) 00:07:58.773 21677.292 - 21778.117: 99.4942% ( 3) 00:07:58.773 21778.117 - 21878.942: 99.5110% ( 3) 00:07:58.773 21878.942 - 21979.766: 99.5279% ( 3) 00:07:58.773 21979.766 - 22080.591: 99.5447% ( 3) 00:07:58.773 22080.591 - 22181.415: 99.5616% ( 3) 00:07:58.773 22181.415 - 22282.240: 99.5785% ( 3) 00:07:58.773 22282.240 - 22383.065: 99.5953% ( 3) 00:07:58.773 22383.065 - 22483.889: 99.6178% ( 4) 00:07:58.773 22483.889 - 22584.714: 99.6347% ( 3) 00:07:58.773 22584.714 - 22685.538: 99.6403% ( 1) 00:07:58.773 27020.997 - 27222.646: 99.6515% ( 2) 00:07:58.773 27222.646 - 27424.295: 99.6853% ( 6) 00:07:58.773 27424.295 - 27625.945: 99.7190% ( 6) 00:07:58.773 27625.945 - 27827.594: 99.7527% ( 6) 00:07:58.773 27827.594 - 28029.243: 99.7977% ( 8) 00:07:58.773 28029.243 - 28230.892: 99.8426% ( 8) 00:07:58.773 28230.892 - 28432.542: 99.8876% ( 8) 00:07:58.773 28432.542 - 28634.191: 99.9213% ( 6) 00:07:58.773 28634.191 - 28835.840: 99.9663% ( 8) 00:07:58.773 28835.840 - 29037.489: 100.0000% ( 6) 00:07:58.773 00:07:58.773 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.773 ============================================================================== 00:07:58.773 Range in us Cumulative IO count 00:07:58.773 6125.095 - 6150.302: 0.0169% ( 3) 00:07:58.773 6175.508 - 6200.714: 0.0393% ( 4) 00:07:58.773 6200.714 - 6225.920: 0.1124% ( 13) 00:07:58.773 6225.920 - 6251.126: 0.2866% ( 31) 00:07:58.773 6251.126 - 6276.332: 0.6295% ( 61) 00:07:58.773 6276.332 - 6301.538: 1.1353% ( 90) 00:07:58.773 6301.538 - 6326.745: 2.0121% ( 156) 00:07:58.773 6326.745 - 6351.951: 3.0632% ( 187) 
00:07:58.773 6351.951 - 6377.157: 4.5189% ( 259) 00:07:58.773 6377.157 - 6402.363: 5.8566% ( 238) 00:07:58.773 6402.363 - 6427.569: 7.4640% ( 286) 00:07:58.773 6427.569 - 6452.775: 9.1108% ( 293) 00:07:58.773 6452.775 - 6503.188: 12.3651% ( 579) 00:07:58.773 6503.188 - 6553.600: 15.7599% ( 604) 00:07:58.773 6553.600 - 6604.012: 19.8572% ( 729) 00:07:58.773 6604.012 - 6654.425: 24.1738% ( 768) 00:07:58.773 6654.425 - 6704.837: 28.7095% ( 807) 00:07:58.774 6704.837 - 6755.249: 33.0767% ( 777) 00:07:58.774 6755.249 - 6805.662: 37.6237% ( 809) 00:07:58.774 6805.662 - 6856.074: 42.1425% ( 804) 00:07:58.774 6856.074 - 6906.486: 46.7457% ( 819) 00:07:58.774 6906.486 - 6956.898: 51.5344% ( 852) 00:07:58.774 6956.898 - 7007.311: 56.4130% ( 868) 00:07:58.774 7007.311 - 7057.723: 61.3028% ( 870) 00:07:58.774 7057.723 - 7108.135: 66.2826% ( 886) 00:07:58.774 7108.135 - 7158.548: 70.8296% ( 809) 00:07:58.774 7158.548 - 7208.960: 75.0955% ( 759) 00:07:58.774 7208.960 - 7259.372: 78.7208% ( 645) 00:07:58.774 7259.372 - 7309.785: 81.5704% ( 507) 00:07:58.774 7309.785 - 7360.197: 84.0209% ( 436) 00:07:58.774 7360.197 - 7410.609: 85.9263% ( 339) 00:07:58.774 7410.609 - 7461.022: 87.4663% ( 274) 00:07:58.774 7461.022 - 7511.434: 88.6803% ( 216) 00:07:58.774 7511.434 - 7561.846: 89.6864% ( 179) 00:07:58.774 7561.846 - 7612.258: 90.4114% ( 129) 00:07:58.774 7612.258 - 7662.671: 90.9510% ( 96) 00:07:58.774 7662.671 - 7713.083: 91.3557% ( 72) 00:07:58.774 7713.083 - 7763.495: 91.7603% ( 72) 00:07:58.774 7763.495 - 7813.908: 92.0920% ( 59) 00:07:58.774 7813.908 - 7864.320: 92.4573% ( 65) 00:07:58.774 7864.320 - 7914.732: 92.7271% ( 48) 00:07:58.774 7914.732 - 7965.145: 92.9912% ( 47) 00:07:58.774 7965.145 - 8015.557: 93.1598% ( 30) 00:07:58.774 8015.557 - 8065.969: 93.3172% ( 28) 00:07:58.774 8065.969 - 8116.382: 93.4577% ( 25) 00:07:58.774 8116.382 - 8166.794: 93.5701% ( 20) 00:07:58.774 8166.794 - 8217.206: 93.6826% ( 20) 00:07:58.774 8217.206 - 8267.618: 93.7781% ( 17) 00:07:58.774 8267.618 - 8318.031: 93.8793% ( 18) 00:07:58.774 8318.031 - 8368.443: 93.9973% ( 21) 00:07:58.774 8368.443 - 8418.855: 94.1266% ( 23) 00:07:58.774 8418.855 - 8469.268: 94.2671% ( 25) 00:07:58.774 8469.268 - 8519.680: 94.3795% ( 20) 00:07:58.774 8519.680 - 8570.092: 94.5144% ( 24) 00:07:58.774 8570.092 - 8620.505: 94.6493% ( 24) 00:07:58.774 8620.505 - 8670.917: 94.7842% ( 24) 00:07:58.774 8670.917 - 8721.329: 94.9303% ( 26) 00:07:58.774 8721.329 - 8771.742: 95.0596% ( 23) 00:07:58.774 8771.742 - 8822.154: 95.1439% ( 15) 00:07:58.774 8822.154 - 8872.566: 95.2507% ( 19) 00:07:58.774 8872.566 - 8922.978: 95.3631% ( 20) 00:07:58.774 8922.978 - 8973.391: 95.4867% ( 22) 00:07:58.774 8973.391 - 9023.803: 95.5935% ( 19) 00:07:58.774 9023.803 - 9074.215: 95.7172% ( 22) 00:07:58.774 9074.215 - 9124.628: 95.8352% ( 21) 00:07:58.774 9124.628 - 9175.040: 96.0038% ( 30) 00:07:58.774 9175.040 - 9225.452: 96.1162% ( 20) 00:07:58.774 9225.452 - 9275.865: 96.2343% ( 21) 00:07:58.774 9275.865 - 9326.277: 96.3298% ( 17) 00:07:58.774 9326.277 - 9376.689: 96.4197% ( 16) 00:07:58.774 9376.689 - 9427.102: 96.5603% ( 25) 00:07:58.774 9427.102 - 9477.514: 96.6727% ( 20) 00:07:58.774 9477.514 - 9527.926: 96.7795% ( 19) 00:07:58.774 9527.926 - 9578.338: 96.8806% ( 18) 00:07:58.774 9578.338 - 9628.751: 96.9705% ( 16) 00:07:58.774 9628.751 - 9679.163: 97.0830% ( 20) 00:07:58.774 9679.163 - 9729.575: 97.1954% ( 20) 00:07:58.774 9729.575 - 9779.988: 97.3134% ( 21) 00:07:58.774 9779.988 - 9830.400: 97.4258% ( 20) 00:07:58.774 9830.400 - 9880.812: 97.5270% ( 18) 
00:07:58.774 9880.812 - 9931.225: 97.5944% ( 12) 00:07:58.774 9931.225 - 9981.637: 97.6675% ( 13) 00:07:58.774 9981.637 - 10032.049: 97.7349% ( 12) 00:07:58.774 10032.049 - 10082.462: 97.8024% ( 12) 00:07:58.774 10082.462 - 10132.874: 97.8530% ( 9) 00:07:58.774 10132.874 - 10183.286: 97.9485% ( 17) 00:07:58.774 10183.286 - 10233.698: 97.9935% ( 8) 00:07:58.774 10233.698 - 10284.111: 98.0384% ( 8) 00:07:58.774 10284.111 - 10334.523: 98.0890% ( 9) 00:07:58.774 10334.523 - 10384.935: 98.1284% ( 7) 00:07:58.774 10384.935 - 10435.348: 98.1846% ( 10) 00:07:58.774 10435.348 - 10485.760: 98.2352% ( 9) 00:07:58.774 10485.760 - 10536.172: 98.2801% ( 8) 00:07:58.774 10536.172 - 10586.585: 98.3251% ( 8) 00:07:58.774 10586.585 - 10636.997: 98.3532% ( 5) 00:07:58.774 10636.997 - 10687.409: 98.3701% ( 3) 00:07:58.774 10687.409 - 10737.822: 98.3869% ( 3) 00:07:58.774 10737.822 - 10788.234: 98.3982% ( 2) 00:07:58.774 10788.234 - 10838.646: 98.4150% ( 3) 00:07:58.774 10838.646 - 10889.058: 98.4263% ( 2) 00:07:58.774 10889.058 - 10939.471: 98.4431% ( 3) 00:07:58.774 10939.471 - 10989.883: 98.4600% ( 3) 00:07:58.774 10989.883 - 11040.295: 98.4712% ( 2) 00:07:58.774 11040.295 - 11090.708: 98.4881% ( 3) 00:07:58.774 11090.708 - 11141.120: 98.4993% ( 2) 00:07:58.774 11141.120 - 11191.532: 98.5162% ( 3) 00:07:58.774 11191.532 - 11241.945: 98.5274% ( 2) 00:07:58.774 11241.945 - 11292.357: 98.5443% ( 3) 00:07:58.774 11292.357 - 11342.769: 98.5612% ( 3) 00:07:58.774 11544.418 - 11594.831: 98.5724% ( 2) 00:07:58.774 11594.831 - 11645.243: 98.5780% ( 1) 00:07:58.774 11645.243 - 11695.655: 98.5893% ( 2) 00:07:58.774 11695.655 - 11746.068: 98.6005% ( 2) 00:07:58.774 11746.068 - 11796.480: 98.6117% ( 2) 00:07:58.774 11796.480 - 11846.892: 98.6230% ( 2) 00:07:58.774 11846.892 - 11897.305: 98.6342% ( 2) 00:07:58.774 11897.305 - 11947.717: 98.6567% ( 4) 00:07:58.774 11947.717 - 11998.129: 98.6792% ( 4) 00:07:58.774 11998.129 - 12048.542: 98.7017% ( 4) 00:07:58.774 12048.542 - 12098.954: 98.7241% ( 4) 00:07:58.774 12098.954 - 12149.366: 98.7466% ( 4) 00:07:58.774 12149.366 - 12199.778: 98.7691% ( 4) 00:07:58.774 12199.778 - 12250.191: 98.7972% ( 5) 00:07:58.774 12250.191 - 12300.603: 98.8197% ( 4) 00:07:58.774 12300.603 - 12351.015: 98.8366% ( 3) 00:07:58.774 12351.015 - 12401.428: 98.8590% ( 4) 00:07:58.774 12401.428 - 12451.840: 98.8815% ( 4) 00:07:58.774 12451.840 - 12502.252: 98.8984% ( 3) 00:07:58.774 12502.252 - 12552.665: 98.9209% ( 4) 00:07:58.774 12552.665 - 12603.077: 98.9433% ( 4) 00:07:58.774 12603.077 - 12653.489: 98.9658% ( 4) 00:07:58.774 12653.489 - 12703.902: 98.9883% ( 4) 00:07:58.774 12703.902 - 12754.314: 99.0108% ( 4) 00:07:58.774 12754.314 - 12804.726: 99.0333% ( 4) 00:07:58.774 12804.726 - 12855.138: 99.0558% ( 4) 00:07:58.774 12855.138 - 12905.551: 99.0782% ( 4) 00:07:58.774 12905.551 - 13006.375: 99.1232% ( 8) 00:07:58.774 13006.375 - 13107.200: 99.1682% ( 8) 00:07:58.774 13107.200 - 13208.025: 99.2131% ( 8) 00:07:58.774 13208.025 - 13308.849: 99.2356% ( 4) 00:07:58.774 13308.849 - 13409.674: 99.2581% ( 4) 00:07:58.774 13409.674 - 13510.498: 99.2750% ( 3) 00:07:58.774 13510.498 - 13611.323: 99.2806% ( 1) 00:07:58.774 18854.203 - 18955.028: 99.2918% ( 2) 00:07:58.774 18955.028 - 19055.852: 99.3031% ( 2) 00:07:58.774 19055.852 - 19156.677: 99.3255% ( 4) 00:07:58.774 19156.677 - 19257.502: 99.3368% ( 2) 00:07:58.774 19257.502 - 19358.326: 99.3536% ( 3) 00:07:58.774 19358.326 - 19459.151: 99.3817% ( 5) 00:07:58.774 19459.151 - 19559.975: 99.4042% ( 4) 00:07:58.774 19559.975 - 19660.800: 99.4211% ( 3) 
00:07:58.774 19660.800 - 19761.625: 99.4379% ( 3) 00:07:58.774 19761.625 - 19862.449: 99.4548% ( 3) 00:07:58.774 19862.449 - 19963.274: 99.4717% ( 3) 00:07:58.774 19963.274 - 20064.098: 99.4885% ( 3) 00:07:58.774 20064.098 - 20164.923: 99.4998% ( 2) 00:07:58.774 20164.923 - 20265.748: 99.5166% ( 3) 00:07:58.774 20265.748 - 20366.572: 99.5391% ( 4) 00:07:58.774 20366.572 - 20467.397: 99.5560% ( 3) 00:07:58.774 20467.397 - 20568.222: 99.5728% ( 3) 00:07:58.774 20568.222 - 20669.046: 99.5897% ( 3) 00:07:58.774 20669.046 - 20769.871: 99.6122% ( 4) 00:07:58.774 20769.871 - 20870.695: 99.6290% ( 3) 00:07:58.774 20870.695 - 20971.520: 99.6403% ( 2) 00:07:58.774 25206.154 - 25306.978: 99.6571% ( 3) 00:07:58.774 25306.978 - 25407.803: 99.6796% ( 4) 00:07:58.774 25407.803 - 25508.628: 99.7021% ( 4) 00:07:58.774 25508.628 - 25609.452: 99.7190% ( 3) 00:07:58.774 25609.452 - 25710.277: 99.7415% ( 4) 00:07:58.774 25710.277 - 25811.102: 99.7639% ( 4) 00:07:58.774 25811.102 - 26012.751: 99.8033% ( 7) 00:07:58.774 26012.751 - 26214.400: 99.8482% ( 8) 00:07:58.774 26214.400 - 26416.049: 99.8876% ( 7) 00:07:58.774 26416.049 - 26617.698: 99.9326% ( 8) 00:07:58.774 26617.698 - 26819.348: 99.9719% ( 7) 00:07:58.774 26819.348 - 27020.997: 100.0000% ( 5) 00:07:58.774 00:07:58.774 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.774 ============================================================================== 00:07:58.774 Range in us Cumulative IO count 00:07:58.774 6175.508 - 6200.714: 0.1012% ( 18) 00:07:58.774 6200.714 - 6225.920: 0.2473% ( 26) 00:07:58.774 6225.920 - 6251.126: 0.5002% ( 45) 00:07:58.774 6251.126 - 6276.332: 0.8318% ( 59) 00:07:58.774 6276.332 - 6301.538: 1.4501% ( 110) 00:07:58.774 6301.538 - 6326.745: 2.1189% ( 119) 00:07:58.774 6326.745 - 6351.951: 3.2037% ( 193) 00:07:58.774 6351.951 - 6377.157: 4.3109% ( 197) 00:07:58.774 6377.157 - 6402.363: 5.6655% ( 241) 00:07:58.774 6402.363 - 6427.569: 7.2786% ( 287) 00:07:58.774 6427.569 - 6452.775: 9.0153% ( 309) 00:07:58.774 6452.775 - 6503.188: 12.6855% ( 653) 00:07:58.774 6503.188 - 6553.600: 16.1589% ( 618) 00:07:58.774 6553.600 - 6604.012: 20.0146% ( 686) 00:07:58.774 6604.012 - 6654.425: 23.9152% ( 694) 00:07:58.774 6654.425 - 6704.837: 28.3723% ( 793) 00:07:58.774 6704.837 - 6755.249: 33.1216% ( 845) 00:07:58.774 6755.249 - 6805.662: 37.6518% ( 806) 00:07:58.774 6805.662 - 6856.074: 42.1650% ( 803) 00:07:58.774 6856.074 - 6906.486: 46.7457% ( 815) 00:07:58.774 6906.486 - 6956.898: 51.4838% ( 843) 00:07:58.774 6956.898 - 7007.311: 56.2050% ( 840) 00:07:58.774 7007.311 - 7057.723: 61.0387% ( 860) 00:07:58.774 7057.723 - 7108.135: 65.9004% ( 865) 00:07:58.774 7108.135 - 7158.548: 70.5261% ( 823) 00:07:58.775 7158.548 - 7208.960: 74.8145% ( 763) 00:07:58.775 7208.960 - 7259.372: 78.5746% ( 669) 00:07:58.775 7259.372 - 7309.785: 81.5142% ( 523) 00:07:58.775 7309.785 - 7360.197: 83.8635% ( 418) 00:07:58.775 7360.197 - 7410.609: 85.7689% ( 339) 00:07:58.775 7410.609 - 7461.022: 87.2920% ( 271) 00:07:58.775 7461.022 - 7511.434: 88.5679% ( 227) 00:07:58.775 7511.434 - 7561.846: 89.5796% ( 180) 00:07:58.775 7561.846 - 7612.258: 90.3271% ( 133) 00:07:58.775 7612.258 - 7662.671: 90.8611% ( 95) 00:07:58.775 7662.671 - 7713.083: 91.3613% ( 89) 00:07:58.775 7713.083 - 7763.495: 91.7828% ( 75) 00:07:58.775 7763.495 - 7813.908: 92.1538% ( 66) 00:07:58.775 7813.908 - 7864.320: 92.4685% ( 56) 00:07:58.775 7864.320 - 7914.732: 92.6652% ( 35) 00:07:58.775 7914.732 - 7965.145: 92.8451% ( 32) 00:07:58.775 7965.145 - 8015.557: 93.0081% ( 29) 
00:07:58.775 8015.557 - 8065.969: 93.1205% ( 20) 00:07:58.775 8065.969 - 8116.382: 93.2835% ( 29) 00:07:58.775 8116.382 - 8166.794: 93.4296% ( 26) 00:07:58.775 8166.794 - 8217.206: 93.6039% ( 31) 00:07:58.775 8217.206 - 8267.618: 93.7837% ( 32) 00:07:58.775 8267.618 - 8318.031: 93.9692% ( 33) 00:07:58.775 8318.031 - 8368.443: 94.1322% ( 29) 00:07:58.775 8368.443 - 8418.855: 94.2952% ( 29) 00:07:58.775 8418.855 - 8469.268: 94.4188% ( 22) 00:07:58.775 8469.268 - 8519.680: 94.5537% ( 24) 00:07:58.775 8519.680 - 8570.092: 94.6549% ( 18) 00:07:58.775 8570.092 - 8620.505: 94.7504% ( 17) 00:07:58.775 8620.505 - 8670.917: 94.8629% ( 20) 00:07:58.775 8670.917 - 8721.329: 94.9584% ( 17) 00:07:58.775 8721.329 - 8771.742: 95.0652% ( 19) 00:07:58.775 8771.742 - 8822.154: 95.1720% ( 19) 00:07:58.775 8822.154 - 8872.566: 95.3462% ( 31) 00:07:58.775 8872.566 - 8922.978: 95.5092% ( 29) 00:07:58.775 8922.978 - 8973.391: 95.6610% ( 27) 00:07:58.775 8973.391 - 9023.803: 95.7734% ( 20) 00:07:58.775 9023.803 - 9074.215: 95.8633% ( 16) 00:07:58.775 9074.215 - 9124.628: 95.9476% ( 15) 00:07:58.775 9124.628 - 9175.040: 96.0375% ( 16) 00:07:58.775 9175.040 - 9225.452: 96.1275% ( 16) 00:07:58.775 9225.452 - 9275.865: 96.2174% ( 16) 00:07:58.775 9275.865 - 9326.277: 96.2905% ( 13) 00:07:58.775 9326.277 - 9376.689: 96.3748% ( 15) 00:07:58.775 9376.689 - 9427.102: 96.4928% ( 21) 00:07:58.775 9427.102 - 9477.514: 96.5996% ( 19) 00:07:58.775 9477.514 - 9527.926: 96.7232% ( 22) 00:07:58.775 9527.926 - 9578.338: 96.8581% ( 24) 00:07:58.775 9578.338 - 9628.751: 96.9537% ( 17) 00:07:58.775 9628.751 - 9679.163: 97.0773% ( 22) 00:07:58.775 9679.163 - 9729.575: 97.2179% ( 25) 00:07:58.775 9729.575 - 9779.988: 97.3415% ( 22) 00:07:58.775 9779.988 - 9830.400: 97.4202% ( 14) 00:07:58.775 9830.400 - 9880.812: 97.4989% ( 14) 00:07:58.775 9880.812 - 9931.225: 97.5719% ( 13) 00:07:58.775 9931.225 - 9981.637: 97.6619% ( 16) 00:07:58.775 9981.637 - 10032.049: 97.7518% ( 16) 00:07:58.775 10032.049 - 10082.462: 97.8305% ( 14) 00:07:58.775 10082.462 - 10132.874: 97.8923% ( 11) 00:07:58.775 10132.874 - 10183.286: 97.9654% ( 13) 00:07:58.775 10183.286 - 10233.698: 98.0272% ( 11) 00:07:58.775 10233.698 - 10284.111: 98.0946% ( 12) 00:07:58.775 10284.111 - 10334.523: 98.1452% ( 9) 00:07:58.775 10334.523 - 10384.935: 98.1846% ( 7) 00:07:58.775 10384.935 - 10435.348: 98.2183% ( 6) 00:07:58.775 10435.348 - 10485.760: 98.2464% ( 5) 00:07:58.775 10485.760 - 10536.172: 98.2914% ( 8) 00:07:58.775 10536.172 - 10586.585: 98.3195% ( 5) 00:07:58.775 10586.585 - 10636.997: 98.3420% ( 4) 00:07:58.775 10636.997 - 10687.409: 98.3757% ( 6) 00:07:58.775 10687.409 - 10737.822: 98.3982% ( 4) 00:07:58.775 10737.822 - 10788.234: 98.4319% ( 6) 00:07:58.775 10788.234 - 10838.646: 98.4600% ( 5) 00:07:58.775 10838.646 - 10889.058: 98.4825% ( 4) 00:07:58.775 10889.058 - 10939.471: 98.4993% ( 3) 00:07:58.775 10939.471 - 10989.883: 98.5106% ( 2) 00:07:58.775 10989.883 - 11040.295: 98.5274% ( 3) 00:07:58.775 11040.295 - 11090.708: 98.5387% ( 2) 00:07:58.775 11090.708 - 11141.120: 98.5499% ( 2) 00:07:58.775 11141.120 - 11191.532: 98.5724% ( 4) 00:07:58.775 11191.532 - 11241.945: 98.5893% ( 3) 00:07:58.775 11241.945 - 11292.357: 98.6005% ( 2) 00:07:58.775 11292.357 - 11342.769: 98.6061% ( 1) 00:07:58.775 11342.769 - 11393.182: 98.6174% ( 2) 00:07:58.775 11393.182 - 11443.594: 98.6286% ( 2) 00:07:58.775 11443.594 - 11494.006: 98.6398% ( 2) 00:07:58.775 11494.006 - 11544.418: 98.6511% ( 2) 00:07:58.775 11544.418 - 11594.831: 98.6623% ( 2) 00:07:58.775 11594.831 - 11645.243: 
98.6736% ( 2) 00:07:58.775 11645.243 - 11695.655: 98.6848% ( 2) 00:07:58.775 11695.655 - 11746.068: 98.6904% ( 1) 00:07:58.775 11746.068 - 11796.480: 98.7017% ( 2) 00:07:58.775 11796.480 - 11846.892: 98.7129% ( 2) 00:07:58.775 11846.892 - 11897.305: 98.7241% ( 2) 00:07:58.775 11897.305 - 11947.717: 98.7354% ( 2) 00:07:58.775 11947.717 - 11998.129: 98.7466% ( 2) 00:07:58.775 11998.129 - 12048.542: 98.7579% ( 2) 00:07:58.775 12048.542 - 12098.954: 98.7691% ( 2) 00:07:58.775 12098.954 - 12149.366: 98.7804% ( 2) 00:07:58.775 12149.366 - 12199.778: 98.7916% ( 2) 00:07:58.775 12199.778 - 12250.191: 98.8028% ( 2) 00:07:58.775 12250.191 - 12300.603: 98.8141% ( 2) 00:07:58.775 12300.603 - 12351.015: 98.8253% ( 2) 00:07:58.775 12351.015 - 12401.428: 98.8366% ( 2) 00:07:58.775 12401.428 - 12451.840: 98.8534% ( 3) 00:07:58.775 12451.840 - 12502.252: 98.8815% ( 5) 00:07:58.775 12502.252 - 12552.665: 98.9096% ( 5) 00:07:58.775 12552.665 - 12603.077: 98.9265% ( 3) 00:07:58.775 12603.077 - 12653.489: 98.9658% ( 7) 00:07:58.775 12653.489 - 12703.902: 99.0108% ( 8) 00:07:58.775 12703.902 - 12754.314: 99.0389% ( 5) 00:07:58.775 12754.314 - 12804.726: 99.0558% ( 3) 00:07:58.775 12905.551 - 13006.375: 99.0839% ( 5) 00:07:58.775 13006.375 - 13107.200: 99.1063% ( 4) 00:07:58.775 13107.200 - 13208.025: 99.1344% ( 5) 00:07:58.775 13208.025 - 13308.849: 99.1569% ( 4) 00:07:58.775 13308.849 - 13409.674: 99.1794% ( 4) 00:07:58.775 13409.674 - 13510.498: 99.2075% ( 5) 00:07:58.775 13510.498 - 13611.323: 99.2300% ( 4) 00:07:58.775 13611.323 - 13712.148: 99.2581% ( 5) 00:07:58.775 13712.148 - 13812.972: 99.2806% ( 4) 00:07:58.775 17341.834 - 17442.658: 99.3143% ( 6) 00:07:58.775 17442.658 - 17543.483: 99.3255% ( 2) 00:07:58.775 17543.483 - 17644.308: 99.3368% ( 2) 00:07:58.775 17644.308 - 17745.132: 99.3536% ( 3) 00:07:58.775 17745.132 - 17845.957: 99.3761% ( 4) 00:07:58.775 17845.957 - 17946.782: 99.3930% ( 3) 00:07:58.775 17946.782 - 18047.606: 99.4155% ( 4) 00:07:58.775 18047.606 - 18148.431: 99.4323% ( 3) 00:07:58.775 18148.431 - 18249.255: 99.4492% ( 3) 00:07:58.775 18249.255 - 18350.080: 99.4717% ( 4) 00:07:58.775 18350.080 - 18450.905: 99.4829% ( 2) 00:07:58.775 18450.905 - 18551.729: 99.4998% ( 3) 00:07:58.775 18551.729 - 18652.554: 99.5166% ( 3) 00:07:58.775 18652.554 - 18753.378: 99.5391% ( 4) 00:07:58.775 18753.378 - 18854.203: 99.5560% ( 3) 00:07:58.775 18854.203 - 18955.028: 99.5728% ( 3) 00:07:58.775 18955.028 - 19055.852: 99.5897% ( 3) 00:07:58.775 19055.852 - 19156.677: 99.6066% ( 3) 00:07:58.775 19156.677 - 19257.502: 99.6234% ( 3) 00:07:58.775 19257.502 - 19358.326: 99.6403% ( 3) 00:07:58.775 23290.486 - 23391.311: 99.6515% ( 2) 00:07:58.775 23391.311 - 23492.135: 99.6684% ( 3) 00:07:58.775 23492.135 - 23592.960: 99.6909% ( 4) 00:07:58.775 23592.960 - 23693.785: 99.7134% ( 4) 00:07:58.775 23693.785 - 23794.609: 99.7302% ( 3) 00:07:58.775 23794.609 - 23895.434: 99.7527% ( 4) 00:07:58.775 23895.434 - 23996.258: 99.7752% ( 4) 00:07:58.775 23996.258 - 24097.083: 99.7977% ( 4) 00:07:58.775 24097.083 - 24197.908: 99.8201% ( 4) 00:07:58.775 24197.908 - 24298.732: 99.8370% ( 3) 00:07:58.775 24298.732 - 24399.557: 99.8595% ( 4) 00:07:58.775 24399.557 - 24500.382: 99.8820% ( 4) 00:07:58.775 24500.382 - 24601.206: 99.8988% ( 3) 00:07:58.775 24601.206 - 24702.031: 99.9213% ( 4) 00:07:58.775 24702.031 - 24802.855: 99.9382% ( 3) 00:07:58.776 24802.855 - 24903.680: 99.9607% ( 4) 00:07:58.776 24903.680 - 25004.505: 99.9831% ( 4) 00:07:58.776 25004.505 - 25105.329: 100.0000% ( 3) 00:07:58.776 00:07:58.776 Latency 
histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.776 ============================================================================== 00:07:58.776 Range in us Cumulative IO count 00:07:58.776 6175.508 - 6200.714: 0.0674% ( 12) 00:07:58.776 6200.714 - 6225.920: 0.1574% ( 16) 00:07:58.776 6225.920 - 6251.126: 0.3991% ( 43) 00:07:58.776 6251.126 - 6276.332: 0.7363% ( 60) 00:07:58.776 6276.332 - 6301.538: 1.3602% ( 111) 00:07:58.776 6301.538 - 6326.745: 2.1246% ( 136) 00:07:58.776 6326.745 - 6351.951: 2.9732% ( 151) 00:07:58.776 6351.951 - 6377.157: 4.2041% ( 219) 00:07:58.776 6377.157 - 6402.363: 5.7161% ( 269) 00:07:58.776 6402.363 - 6427.569: 7.1942% ( 263) 00:07:58.776 6427.569 - 6452.775: 8.9029% ( 304) 00:07:58.776 6452.775 - 6503.188: 12.5562% ( 650) 00:07:58.776 6503.188 - 6553.600: 16.2039% ( 649) 00:07:58.776 6553.600 - 6604.012: 19.8123% ( 642) 00:07:58.776 6604.012 - 6654.425: 23.9883% ( 743) 00:07:58.776 6654.425 - 6704.837: 28.5128% ( 805) 00:07:58.776 6704.837 - 6755.249: 33.0654% ( 810) 00:07:58.776 6755.249 - 6805.662: 37.7866% ( 840) 00:07:58.776 6805.662 - 6856.074: 42.3168% ( 806) 00:07:58.776 6856.074 - 6906.486: 47.0268% ( 838) 00:07:58.776 6906.486 - 6956.898: 51.7480% ( 840) 00:07:58.776 6956.898 - 7007.311: 56.5423% ( 853) 00:07:58.776 7007.311 - 7057.723: 61.3028% ( 847) 00:07:58.776 7057.723 - 7108.135: 66.0578% ( 846) 00:07:58.776 7108.135 - 7158.548: 70.8464% ( 852) 00:07:58.776 7158.548 - 7208.960: 75.1068% ( 758) 00:07:58.776 7208.960 - 7259.372: 78.7152% ( 642) 00:07:58.776 7259.372 - 7309.785: 81.6434% ( 521) 00:07:58.776 7309.785 - 7360.197: 84.0378% ( 426) 00:07:58.776 7360.197 - 7410.609: 85.9825% ( 346) 00:07:58.776 7410.609 - 7461.022: 87.4944% ( 269) 00:07:58.776 7461.022 - 7511.434: 88.7028% ( 215) 00:07:58.776 7511.434 - 7561.846: 89.6583% ( 170) 00:07:58.776 7561.846 - 7612.258: 90.4058% ( 133) 00:07:58.776 7612.258 - 7662.671: 90.9847% ( 103) 00:07:58.776 7662.671 - 7713.083: 91.4175% ( 77) 00:07:58.776 7713.083 - 7763.495: 91.8559% ( 78) 00:07:58.776 7763.495 - 7813.908: 92.2044% ( 62) 00:07:58.776 7813.908 - 7864.320: 92.4854% ( 50) 00:07:58.776 7864.320 - 7914.732: 92.6990% ( 38) 00:07:58.776 7914.732 - 7965.145: 92.8957% ( 35) 00:07:58.776 7965.145 - 8015.557: 93.0812% ( 33) 00:07:58.776 8015.557 - 8065.969: 93.2048% ( 22) 00:07:58.776 8065.969 - 8116.382: 93.3285% ( 22) 00:07:58.776 8116.382 - 8166.794: 93.4240% ( 17) 00:07:58.776 8166.794 - 8217.206: 93.5252% ( 18) 00:07:58.776 8217.206 - 8267.618: 93.7050% ( 32) 00:07:58.776 8267.618 - 8318.031: 93.8399% ( 24) 00:07:58.776 8318.031 - 8368.443: 93.9861% ( 26) 00:07:58.776 8368.443 - 8418.855: 94.1210% ( 24) 00:07:58.776 8418.855 - 8469.268: 94.2502% ( 23) 00:07:58.776 8469.268 - 8519.680: 94.3907% ( 25) 00:07:58.776 8519.680 - 8570.092: 94.5031% ( 20) 00:07:58.776 8570.092 - 8620.505: 94.6156% ( 20) 00:07:58.776 8620.505 - 8670.917: 94.7280% ( 20) 00:07:58.776 8670.917 - 8721.329: 94.8516% ( 22) 00:07:58.776 8721.329 - 8771.742: 95.0259% ( 31) 00:07:58.776 8771.742 - 8822.154: 95.1664% ( 25) 00:07:58.776 8822.154 - 8872.566: 95.2563% ( 16) 00:07:58.776 8872.566 - 8922.978: 95.3406% ( 15) 00:07:58.776 8922.978 - 8973.391: 95.4305% ( 16) 00:07:58.776 8973.391 - 9023.803: 95.5486% ( 21) 00:07:58.776 9023.803 - 9074.215: 95.6835% ( 24) 00:07:58.776 9074.215 - 9124.628: 95.8240% ( 25) 00:07:58.776 9124.628 - 9175.040: 95.9532% ( 23) 00:07:58.776 9175.040 - 9225.452: 96.0713% ( 21) 00:07:58.776 9225.452 - 9275.865: 96.1837% ( 20) 00:07:58.776 9275.865 - 9326.277: 96.2961% ( 20) 
00:07:58.776 9326.277 - 9376.689: 96.4029% ( 19) 00:07:58.776 9376.689 - 9427.102: 96.5040% ( 18) 00:07:58.776 9427.102 - 9477.514: 96.6108% ( 19) 00:07:58.776 9477.514 - 9527.926: 96.7064% ( 17) 00:07:58.776 9527.926 - 9578.338: 96.8019% ( 17) 00:07:58.776 9578.338 - 9628.751: 96.8975% ( 17) 00:07:58.776 9628.751 - 9679.163: 96.9705% ( 13) 00:07:58.776 9679.163 - 9729.575: 97.0773% ( 19) 00:07:58.776 9729.575 - 9779.988: 97.2010% ( 22) 00:07:58.776 9779.988 - 9830.400: 97.3190% ( 21) 00:07:58.776 9830.400 - 9880.812: 97.4314% ( 20) 00:07:58.776 9880.812 - 9931.225: 97.5270% ( 17) 00:07:58.776 9931.225 - 9981.637: 97.6057% ( 14) 00:07:58.776 9981.637 - 10032.049: 97.6450% ( 7) 00:07:58.776 10032.049 - 10082.462: 97.7125% ( 12) 00:07:58.776 10082.462 - 10132.874: 97.7968% ( 15) 00:07:58.776 10132.874 - 10183.286: 97.8642% ( 12) 00:07:58.776 10183.286 - 10233.698: 97.9317% ( 12) 00:07:58.776 10233.698 - 10284.111: 97.9822% ( 9) 00:07:58.776 10284.111 - 10334.523: 98.0216% ( 7) 00:07:58.776 10334.523 - 10384.935: 98.0609% ( 7) 00:07:58.776 10384.935 - 10435.348: 98.1115% ( 9) 00:07:58.776 10435.348 - 10485.760: 98.1565% ( 8) 00:07:58.776 10485.760 - 10536.172: 98.2295% ( 13) 00:07:58.776 10536.172 - 10586.585: 98.2914% ( 11) 00:07:58.776 10586.585 - 10636.997: 98.3532% ( 11) 00:07:58.776 10636.997 - 10687.409: 98.3925% ( 7) 00:07:58.776 10687.409 - 10737.822: 98.4206% ( 5) 00:07:58.776 10737.822 - 10788.234: 98.4656% ( 8) 00:07:58.776 10788.234 - 10838.646: 98.5106% ( 8) 00:07:58.776 10838.646 - 10889.058: 98.5612% ( 9) 00:07:58.776 10889.058 - 10939.471: 98.6005% ( 7) 00:07:58.776 10939.471 - 10989.883: 98.6286% ( 5) 00:07:58.776 10989.883 - 11040.295: 98.6623% ( 6) 00:07:58.776 11040.295 - 11090.708: 98.6848% ( 4) 00:07:58.776 11090.708 - 11141.120: 98.7073% ( 4) 00:07:58.776 11141.120 - 11191.532: 98.7298% ( 4) 00:07:58.776 11191.532 - 11241.945: 98.7410% ( 2) 00:07:58.776 11241.945 - 11292.357: 98.7522% ( 2) 00:07:58.776 11292.357 - 11342.769: 98.7635% ( 2) 00:07:58.776 11342.769 - 11393.182: 98.7691% ( 1) 00:07:58.776 11393.182 - 11443.594: 98.7804% ( 2) 00:07:58.776 11443.594 - 11494.006: 98.7916% ( 2) 00:07:58.776 11494.006 - 11544.418: 98.8028% ( 2) 00:07:58.776 11544.418 - 11594.831: 98.8141% ( 2) 00:07:58.776 11594.831 - 11645.243: 98.8197% ( 1) 00:07:58.776 11645.243 - 11695.655: 98.8309% ( 2) 00:07:58.776 11695.655 - 11746.068: 98.8422% ( 2) 00:07:58.776 11746.068 - 11796.480: 98.8534% ( 2) 00:07:58.776 11796.480 - 11846.892: 98.8647% ( 2) 00:07:58.776 11846.892 - 11897.305: 98.8759% ( 2) 00:07:58.776 11897.305 - 11947.717: 98.8871% ( 2) 00:07:58.776 11947.717 - 11998.129: 98.8984% ( 2) 00:07:58.776 11998.129 - 12048.542: 98.9096% ( 2) 00:07:58.776 12048.542 - 12098.954: 98.9209% ( 2) 00:07:58.776 13006.375 - 13107.200: 98.9433% ( 4) 00:07:58.776 13107.200 - 13208.025: 98.9658% ( 4) 00:07:58.776 13208.025 - 13308.849: 99.0164% ( 9) 00:07:58.776 13409.674 - 13510.498: 99.0501% ( 6) 00:07:58.776 13510.498 - 13611.323: 99.0895% ( 7) 00:07:58.776 13611.323 - 13712.148: 99.1232% ( 6) 00:07:58.776 13712.148 - 13812.972: 99.1625% ( 7) 00:07:58.776 13812.972 - 13913.797: 99.2075% ( 8) 00:07:58.776 13913.797 - 14014.622: 99.2469% ( 7) 00:07:58.776 14014.622 - 14115.446: 99.2806% ( 6) 00:07:58.776 15829.465 - 15930.289: 99.3199% ( 7) 00:07:58.776 15930.289 - 16031.114: 99.3536% ( 6) 00:07:58.776 16031.114 - 16131.938: 99.3649% ( 2) 00:07:58.776 16131.938 - 16232.763: 99.3817% ( 3) 00:07:58.776 16232.763 - 16333.588: 99.3874% ( 1) 00:07:58.776 16333.588 - 16434.412: 99.4098% ( 4) 
00:07:58.776 16434.412 - 16535.237: 99.4267% ( 3) 00:07:58.776 16535.237 - 16636.062: 99.4492% ( 4) 00:07:58.776 16636.062 - 16736.886: 99.4661% ( 3) 00:07:58.776 16736.886 - 16837.711: 99.4885% ( 4) 00:07:58.776 16837.711 - 16938.535: 99.5054% ( 3) 00:07:58.776 16938.535 - 17039.360: 99.5223% ( 3) 00:07:58.776 17039.360 - 17140.185: 99.5335% ( 2) 00:07:58.776 17140.185 - 17241.009: 99.5560% ( 4) 00:07:58.776 17241.009 - 17341.834: 99.5728% ( 3) 00:07:58.776 17341.834 - 17442.658: 99.5897% ( 3) 00:07:58.776 17442.658 - 17543.483: 99.6066% ( 3) 00:07:58.776 17543.483 - 17644.308: 99.6234% ( 3) 00:07:58.776 17644.308 - 17745.132: 99.6403% ( 3) 00:07:58.776 21374.818 - 21475.643: 99.6571% ( 3) 00:07:58.776 21475.643 - 21576.468: 99.6796% ( 4) 00:07:58.776 21576.468 - 21677.292: 99.7021% ( 4) 00:07:58.776 21677.292 - 21778.117: 99.7246% ( 4) 00:07:58.776 21778.117 - 21878.942: 99.7471% ( 4) 00:07:58.776 21878.942 - 21979.766: 99.7639% ( 3) 00:07:58.776 21979.766 - 22080.591: 99.7808% ( 3) 00:07:58.776 22080.591 - 22181.415: 99.8033% ( 4) 00:07:58.776 22181.415 - 22282.240: 99.8258% ( 4) 00:07:58.776 22282.240 - 22383.065: 99.8426% ( 3) 00:07:58.776 22383.065 - 22483.889: 99.8651% ( 4) 00:07:58.776 22483.889 - 22584.714: 99.8876% ( 4) 00:07:58.776 22584.714 - 22685.538: 99.9045% ( 3) 00:07:58.776 22685.538 - 22786.363: 99.9269% ( 4) 00:07:58.776 22786.363 - 22887.188: 99.9494% ( 4) 00:07:58.776 22887.188 - 22988.012: 99.9719% ( 4) 00:07:58.776 22988.012 - 23088.837: 99.9888% ( 3) 00:07:58.776 23088.837 - 23189.662: 100.0000% ( 2) 00:07:58.776 00:07:58.776 19:06:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:59.718 Initializing NVMe Controllers 00:07:59.718 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.718 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.718 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:59.718 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:59.718 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:59.718 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:59.718 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:59.718 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:59.718 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:59.718 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:59.718 Initialization complete. Launching workers. 
00:07:59.718 ======================================================== 00:07:59.718 Latency(us) 00:07:59.718 Device Information : IOPS MiB/s Average min max 00:07:59.718 PCIE (0000:00:10.0) NSID 1 from core 0: 12429.00 145.65 10314.92 6228.32 35312.74 00:07:59.718 PCIE (0000:00:11.0) NSID 1 from core 0: 12429.00 145.65 10297.63 6394.75 33250.12 00:07:59.718 PCIE (0000:00:13.0) NSID 1 from core 0: 12429.00 145.65 10279.28 6316.64 31925.76 00:07:59.718 PCIE (0000:00:12.0) NSID 1 from core 0: 12429.00 145.65 10261.49 6292.73 29843.83 00:07:59.718 PCIE (0000:00:12.0) NSID 2 from core 0: 12429.00 145.65 10243.69 6418.15 27843.45 00:07:59.718 PCIE (0000:00:12.0) NSID 3 from core 0: 12492.74 146.40 10174.35 6508.77 21766.13 00:07:59.718 ======================================================== 00:07:59.718 Total : 74637.76 874.66 10261.82 6228.32 35312.74 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6755.249us 00:07:59.718 10.00000% : 7813.908us 00:07:59.718 25.00000% : 9175.040us 00:07:59.718 50.00000% : 10233.698us 00:07:59.718 75.00000% : 11040.295us 00:07:59.718 90.00000% : 12048.542us 00:07:59.718 95.00000% : 13107.200us 00:07:59.718 98.00000% : 14518.745us 00:07:59.718 99.00000% : 28029.243us 00:07:59.718 99.50000% : 33473.772us 00:07:59.718 99.90000% : 35086.966us 00:07:59.718 99.99000% : 35288.615us 00:07:59.718 99.99900% : 35490.265us 00:07:59.718 99.99990% : 35490.265us 00:07:59.718 99.99999% : 35490.265us 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6755.249us 00:07:59.718 10.00000% : 7763.495us 00:07:59.718 25.00000% : 9225.452us 00:07:59.718 50.00000% : 10233.698us 00:07:59.718 75.00000% : 10989.883us 00:07:59.718 90.00000% : 11998.129us 00:07:59.718 95.00000% : 12905.551us 00:07:59.718 98.00000% : 14720.394us 00:07:59.718 99.00000% : 26416.049us 00:07:59.718 99.50000% : 31658.929us 00:07:59.718 99.90000% : 33070.474us 00:07:59.718 99.99000% : 33272.123us 00:07:59.718 99.99900% : 33272.123us 00:07:59.718 99.99990% : 33272.123us 00:07:59.718 99.99999% : 33272.123us 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6805.662us 00:07:59.718 10.00000% : 7965.145us 00:07:59.718 25.00000% : 9175.040us 00:07:59.718 50.00000% : 10233.698us 00:07:59.718 75.00000% : 10989.883us 00:07:59.718 90.00000% : 11998.129us 00:07:59.718 95.00000% : 13107.200us 00:07:59.718 98.00000% : 14720.394us 00:07:59.718 99.00000% : 24500.382us 00:07:59.718 99.50000% : 30045.735us 00:07:59.718 99.90000% : 31658.929us 00:07:59.718 99.99000% : 32062.228us 00:07:59.718 99.99900% : 32062.228us 00:07:59.718 99.99990% : 32062.228us 00:07:59.718 99.99999% : 32062.228us 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6805.662us 00:07:59.718 10.00000% : 7864.320us 00:07:59.718 25.00000% : 9175.040us 00:07:59.718 50.00000% : 10233.698us 00:07:59.718 75.00000% : 10989.883us 00:07:59.718 90.00000% : 11947.717us 00:07:59.718 95.00000% : 13006.375us 00:07:59.718 98.00000% : 14720.394us 
00:07:59.718 99.00000% : 22786.363us 00:07:59.718 99.50000% : 28230.892us 00:07:59.718 99.90000% : 29642.437us 00:07:59.718 99.99000% : 29844.086us 00:07:59.718 99.99900% : 29844.086us 00:07:59.718 99.99990% : 29844.086us 00:07:59.718 99.99999% : 29844.086us 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6755.249us 00:07:59.718 10.00000% : 7813.908us 00:07:59.718 25.00000% : 9225.452us 00:07:59.718 50.00000% : 10284.111us 00:07:59.718 75.00000% : 10989.883us 00:07:59.718 90.00000% : 11846.892us 00:07:59.718 95.00000% : 13006.375us 00:07:59.718 98.00000% : 14922.043us 00:07:59.718 99.00000% : 20769.871us 00:07:59.718 99.50000% : 26214.400us 00:07:59.718 99.90000% : 27625.945us 00:07:59.718 99.99000% : 27827.594us 00:07:59.718 99.99900% : 28029.243us 00:07:59.718 99.99990% : 28029.243us 00:07:59.718 99.99999% : 28029.243us 00:07:59.718 00:07:59.718 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.718 ================================================================================= 00:07:59.718 1.00000% : 6755.249us 00:07:59.718 10.00000% : 7813.908us 00:07:59.718 25.00000% : 9175.040us 00:07:59.718 50.00000% : 10284.111us 00:07:59.718 75.00000% : 10939.471us 00:07:59.718 90.00000% : 11897.305us 00:07:59.718 95.00000% : 13308.849us 00:07:59.718 98.00000% : 14619.569us 00:07:59.718 99.00000% : 15426.166us 00:07:59.718 99.50000% : 20064.098us 00:07:59.718 99.90000% : 21475.643us 00:07:59.718 99.99000% : 21778.117us 00:07:59.718 99.99900% : 21778.117us 00:07:59.718 99.99990% : 21778.117us 00:07:59.718 99.99999% : 21778.117us 00:07:59.718 00:07:59.718 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.718 ============================================================================== 00:07:59.718 Range in us Cumulative IO count 00:07:59.718 6225.920 - 6251.126: 0.0240% ( 3) 00:07:59.718 6251.126 - 6276.332: 0.0561% ( 4) 00:07:59.718 6276.332 - 6301.538: 0.0801% ( 3) 00:07:59.718 6301.538 - 6326.745: 0.1122% ( 4) 00:07:59.718 6326.745 - 6351.951: 0.1362% ( 3) 00:07:59.718 6351.951 - 6377.157: 0.1843% ( 6) 00:07:59.718 6377.157 - 6402.363: 0.1923% ( 1) 00:07:59.718 6402.363 - 6427.569: 0.2244% ( 4) 00:07:59.718 6427.569 - 6452.775: 0.2404% ( 2) 00:07:59.718 6452.775 - 6503.188: 0.3205% ( 10) 00:07:59.718 6503.188 - 6553.600: 0.4728% ( 19) 00:07:59.718 6553.600 - 6604.012: 0.6090% ( 17) 00:07:59.718 6604.012 - 6654.425: 0.7692% ( 20) 00:07:59.718 6654.425 - 6704.837: 0.9295% ( 20) 00:07:59.718 6704.837 - 6755.249: 1.3061% ( 47) 00:07:59.718 6755.249 - 6805.662: 1.6026% ( 37) 00:07:59.718 6805.662 - 6856.074: 1.8750% ( 34) 00:07:59.718 6856.074 - 6906.486: 2.2196% ( 43) 00:07:59.718 6906.486 - 6956.898: 2.5000% ( 35) 00:07:59.718 6956.898 - 7007.311: 2.8526% ( 44) 00:07:59.718 7007.311 - 7057.723: 3.2051% ( 44) 00:07:59.718 7057.723 - 7108.135: 3.5497% ( 43) 00:07:59.718 7108.135 - 7158.548: 3.8381% ( 36) 00:07:59.718 7158.548 - 7208.960: 4.1667% ( 41) 00:07:59.718 7208.960 - 7259.372: 4.5913% ( 53) 00:07:59.718 7259.372 - 7309.785: 5.0160% ( 53) 00:07:59.718 7309.785 - 7360.197: 5.4567% ( 55) 00:07:59.718 7360.197 - 7410.609: 5.9295% ( 59) 00:07:59.718 7410.609 - 7461.022: 6.4984% ( 71) 00:07:59.718 7461.022 - 7511.434: 6.8189% ( 40) 00:07:59.718 7511.434 - 7561.846: 7.1074% ( 36) 00:07:59.718 7561.846 - 7612.258: 7.6843% ( 72) 00:07:59.718 7612.258 - 7662.671: 8.1731% ( 61) 00:07:59.718 7662.671 
- 7713.083: 8.8061% ( 79) 00:07:59.718 7713.083 - 7763.495: 9.5353% ( 91) 00:07:59.718 7763.495 - 7813.908: 10.0641% ( 66) 00:07:59.718 7813.908 - 7864.320: 10.5128% ( 56) 00:07:59.718 7864.320 - 7914.732: 10.8654% ( 44) 00:07:59.718 7914.732 - 7965.145: 11.1619% ( 37) 00:07:59.718 7965.145 - 8015.557: 11.6106% ( 56) 00:07:59.718 8015.557 - 8065.969: 12.0112% ( 50) 00:07:59.718 8065.969 - 8116.382: 12.3958% ( 48) 00:07:59.718 8116.382 - 8166.794: 12.9567% ( 70) 00:07:59.718 8166.794 - 8217.206: 13.4856% ( 66) 00:07:59.718 8217.206 - 8267.618: 13.9663% ( 60) 00:07:59.718 8267.618 - 8318.031: 14.5032% ( 67) 00:07:59.718 8318.031 - 8368.443: 14.8958% ( 49) 00:07:59.718 8368.443 - 8418.855: 15.1843% ( 36) 00:07:59.718 8418.855 - 8469.268: 15.5849% ( 50) 00:07:59.718 8469.268 - 8519.680: 16.2019% ( 77) 00:07:59.718 8519.680 - 8570.092: 16.8269% ( 78) 00:07:59.718 8570.092 - 8620.505: 17.4199% ( 74) 00:07:59.718 8620.505 - 8670.917: 17.8045% ( 48) 00:07:59.718 8670.917 - 8721.329: 18.4135% ( 76) 00:07:59.718 8721.329 - 8771.742: 19.0144% ( 75) 00:07:59.718 8771.742 - 8822.154: 19.6074% ( 74) 00:07:59.718 8822.154 - 8872.566: 20.2404% ( 79) 00:07:59.718 8872.566 - 8922.978: 20.9535% ( 89) 00:07:59.718 8922.978 - 8973.391: 21.6026% ( 81) 00:07:59.718 8973.391 - 9023.803: 22.4119% ( 101) 00:07:59.718 9023.803 - 9074.215: 23.3494% ( 117) 00:07:59.718 9074.215 - 9124.628: 24.4712% ( 140) 00:07:59.718 9124.628 - 9175.040: 25.5288% ( 132) 00:07:59.718 9175.040 - 9225.452: 26.3622% ( 104) 00:07:59.718 9225.452 - 9275.865: 27.1955% ( 104) 00:07:59.718 9275.865 - 9326.277: 27.9487% ( 94) 00:07:59.718 9326.277 - 9376.689: 28.8622% ( 114) 00:07:59.718 9376.689 - 9427.102: 29.7276% ( 108) 00:07:59.718 9427.102 - 9477.514: 30.6490% ( 115) 00:07:59.718 9477.514 - 9527.926: 31.5224% ( 109) 00:07:59.718 9527.926 - 9578.338: 32.5481% ( 128) 00:07:59.718 9578.338 - 9628.751: 33.8141% ( 158) 00:07:59.718 9628.751 - 9679.163: 35.0240% ( 151) 00:07:59.718 9679.163 - 9729.575: 35.9776% ( 119) 00:07:59.718 9729.575 - 9779.988: 37.1955% ( 152) 00:07:59.718 9779.988 - 9830.400: 38.5016% ( 163) 00:07:59.718 9830.400 - 9880.812: 40.1282% ( 203) 00:07:59.718 9880.812 - 9931.225: 41.7788% ( 206) 00:07:59.718 9931.225 - 9981.637: 43.3814% ( 200) 00:07:59.718 9981.637 - 10032.049: 44.7917% ( 176) 00:07:59.718 10032.049 - 10082.462: 46.2981% ( 188) 00:07:59.718 10082.462 - 10132.874: 47.9888% ( 211) 00:07:59.718 10132.874 - 10183.286: 49.7276% ( 217) 00:07:59.718 10183.286 - 10233.698: 51.0978% ( 171) 00:07:59.718 10233.698 - 10284.111: 52.5561% ( 182) 00:07:59.718 10284.111 - 10334.523: 54.0064% ( 181) 00:07:59.718 10334.523 - 10384.935: 55.4728% ( 183) 00:07:59.718 10384.935 - 10435.348: 56.9631% ( 186) 00:07:59.718 10435.348 - 10485.760: 58.5657% ( 200) 00:07:59.718 10485.760 - 10536.172: 60.3045% ( 217) 00:07:59.718 10536.172 - 10586.585: 61.7147% ( 176) 00:07:59.718 10586.585 - 10636.997: 63.5176% ( 225) 00:07:59.718 10636.997 - 10687.409: 65.0801% ( 195) 00:07:59.718 10687.409 - 10737.822: 66.7228% ( 205) 00:07:59.718 10737.822 - 10788.234: 68.4615% ( 217) 00:07:59.718 10788.234 - 10838.646: 70.3125% ( 231) 00:07:59.718 10838.646 - 10889.058: 71.8029% ( 186) 00:07:59.718 10889.058 - 10939.471: 73.1731% ( 171) 00:07:59.718 10939.471 - 10989.883: 74.4151% ( 155) 00:07:59.718 10989.883 - 11040.295: 75.5529% ( 142) 00:07:59.718 11040.295 - 11090.708: 76.7388% ( 148) 00:07:59.718 11090.708 - 11141.120: 77.9167% ( 147) 00:07:59.718 11141.120 - 11191.532: 79.1587% ( 155) 00:07:59.718 11191.532 - 11241.945: 80.2003% ( 130) 
00:07:59.718 11241.945 - 11292.357: 81.2580% ( 132) 00:07:59.718 11292.357 - 11342.769: 82.2196% ( 120) 00:07:59.718 11342.769 - 11393.182: 83.1010% ( 110) 00:07:59.718 11393.182 - 11443.594: 83.8141% ( 89) 00:07:59.718 11443.594 - 11494.006: 84.5433% ( 91) 00:07:59.718 11494.006 - 11544.418: 85.4327% ( 111) 00:07:59.718 11544.418 - 11594.831: 86.1058% ( 84) 00:07:59.718 11594.831 - 11645.243: 86.6506% ( 68) 00:07:59.718 11645.243 - 11695.655: 87.0753% ( 53) 00:07:59.719 11695.655 - 11746.068: 87.5881% ( 64) 00:07:59.719 11746.068 - 11796.480: 87.9006% ( 39) 00:07:59.719 11796.480 - 11846.892: 88.2612% ( 45) 00:07:59.719 11846.892 - 11897.305: 88.7099% ( 56) 00:07:59.719 11897.305 - 11947.717: 89.1907% ( 60) 00:07:59.719 11947.717 - 11998.129: 89.5353% ( 43) 00:07:59.719 11998.129 - 12048.542: 90.0000% ( 58) 00:07:59.719 12048.542 - 12098.954: 90.3686% ( 46) 00:07:59.719 12098.954 - 12149.366: 90.8173% ( 56) 00:07:59.719 12149.366 - 12199.778: 91.1298% ( 39) 00:07:59.719 12199.778 - 12250.191: 91.5545% ( 53) 00:07:59.719 12250.191 - 12300.603: 91.8510% ( 37) 00:07:59.719 12300.603 - 12351.015: 92.1074% ( 32) 00:07:59.719 12351.015 - 12401.428: 92.3718% ( 33) 00:07:59.719 12401.428 - 12451.840: 92.6202% ( 31) 00:07:59.719 12451.840 - 12502.252: 92.9407% ( 40) 00:07:59.719 12502.252 - 12552.665: 93.1090% ( 21) 00:07:59.719 12552.665 - 12603.077: 93.3013% ( 24) 00:07:59.719 12603.077 - 12653.489: 93.4696% ( 21) 00:07:59.719 12653.489 - 12703.902: 93.6699% ( 25) 00:07:59.719 12703.902 - 12754.314: 93.8462% ( 22) 00:07:59.719 12754.314 - 12804.726: 94.0224% ( 22) 00:07:59.719 12804.726 - 12855.138: 94.2147% ( 24) 00:07:59.719 12855.138 - 12905.551: 94.3830% ( 21) 00:07:59.719 12905.551 - 13006.375: 94.8237% ( 55) 00:07:59.719 13006.375 - 13107.200: 95.2163% ( 49) 00:07:59.719 13107.200 - 13208.025: 95.4167% ( 25) 00:07:59.719 13208.025 - 13308.849: 95.6170% ( 25) 00:07:59.719 13308.849 - 13409.674: 95.8093% ( 24) 00:07:59.719 13409.674 - 13510.498: 96.0176% ( 26) 00:07:59.719 13510.498 - 13611.323: 96.3782% ( 45) 00:07:59.719 13611.323 - 13712.148: 96.6106% ( 29) 00:07:59.719 13712.148 - 13812.972: 96.8349% ( 28) 00:07:59.719 13812.972 - 13913.797: 96.9631% ( 16) 00:07:59.719 13913.797 - 14014.622: 97.1394% ( 22) 00:07:59.719 14014.622 - 14115.446: 97.3558% ( 27) 00:07:59.719 14115.446 - 14216.271: 97.6522% ( 37) 00:07:59.719 14216.271 - 14317.095: 97.7804% ( 16) 00:07:59.719 14317.095 - 14417.920: 97.9247% ( 18) 00:07:59.719 14417.920 - 14518.745: 98.1410% ( 27) 00:07:59.719 14518.745 - 14619.569: 98.4135% ( 34) 00:07:59.719 14619.569 - 14720.394: 98.5337% ( 15) 00:07:59.719 14720.394 - 14821.218: 98.6779% ( 18) 00:07:59.719 14821.218 - 14922.043: 98.7260% ( 6) 00:07:59.719 14922.043 - 15022.868: 98.7500% ( 3) 00:07:59.719 15022.868 - 15123.692: 98.7821% ( 4) 00:07:59.719 15123.692 - 15224.517: 98.8141% ( 4) 00:07:59.719 15224.517 - 15325.342: 98.8542% ( 5) 00:07:59.719 15325.342 - 15426.166: 98.8782% ( 3) 00:07:59.719 15426.166 - 15526.991: 98.9183% ( 5) 00:07:59.719 15526.991 - 15627.815: 98.9423% ( 3) 00:07:59.719 15627.815 - 15728.640: 98.9744% ( 4) 00:07:59.719 27625.945 - 27827.594: 98.9984% ( 3) 00:07:59.719 27827.594 - 28029.243: 99.0625% ( 8) 00:07:59.719 28029.243 - 28230.892: 99.1106% ( 6) 00:07:59.719 28230.892 - 28432.542: 99.1667% ( 7) 00:07:59.719 28432.542 - 28634.191: 99.2228% ( 7) 00:07:59.719 28634.191 - 28835.840: 99.2788% ( 7) 00:07:59.719 28835.840 - 29037.489: 99.3349% ( 7) 00:07:59.719 29037.489 - 29239.138: 99.3910% ( 7) 00:07:59.719 29239.138 - 29440.788: 99.4391% ( 
6) 00:07:59.719 29440.788 - 29642.437: 99.4872% ( 6) 00:07:59.719 33272.123 - 33473.772: 99.5192% ( 4) 00:07:59.719 33473.772 - 33675.422: 99.5673% ( 6) 00:07:59.719 33675.422 - 33877.071: 99.6234% ( 7) 00:07:59.719 33877.071 - 34078.720: 99.6715% ( 6) 00:07:59.719 34078.720 - 34280.369: 99.7276% ( 7) 00:07:59.719 34280.369 - 34482.018: 99.7756% ( 6) 00:07:59.719 34482.018 - 34683.668: 99.8317% ( 7) 00:07:59.719 34683.668 - 34885.317: 99.8878% ( 7) 00:07:59.719 34885.317 - 35086.966: 99.9359% ( 6) 00:07:59.719 35086.966 - 35288.615: 99.9920% ( 7) 00:07:59.719 35288.615 - 35490.265: 100.0000% ( 1) 00:07:59.719 00:07:59.719 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.719 ============================================================================== 00:07:59.719 Range in us Cumulative IO count 00:07:59.719 6377.157 - 6402.363: 0.0080% ( 1) 00:07:59.719 6452.775 - 6503.188: 0.0240% ( 2) 00:07:59.719 6503.188 - 6553.600: 0.0481% ( 3) 00:07:59.719 6553.600 - 6604.012: 0.1282% ( 10) 00:07:59.719 6604.012 - 6654.425: 0.2885% ( 20) 00:07:59.719 6654.425 - 6704.837: 0.6490% ( 45) 00:07:59.719 6704.837 - 6755.249: 1.3221% ( 84) 00:07:59.719 6755.249 - 6805.662: 1.4824% ( 20) 00:07:59.719 6805.662 - 6856.074: 1.8429% ( 45) 00:07:59.719 6856.074 - 6906.486: 2.0994% ( 32) 00:07:59.719 6906.486 - 6956.898: 2.3397% ( 30) 00:07:59.719 6956.898 - 7007.311: 3.1170% ( 97) 00:07:59.719 7007.311 - 7057.723: 3.4615% ( 43) 00:07:59.719 7057.723 - 7108.135: 3.6538% ( 24) 00:07:59.719 7108.135 - 7158.548: 3.8381% ( 23) 00:07:59.719 7158.548 - 7208.960: 4.2869% ( 56) 00:07:59.719 7208.960 - 7259.372: 4.5833% ( 37) 00:07:59.719 7259.372 - 7309.785: 4.8958% ( 39) 00:07:59.719 7309.785 - 7360.197: 5.2965% ( 50) 00:07:59.719 7360.197 - 7410.609: 5.6330% ( 42) 00:07:59.719 7410.609 - 7461.022: 6.0016% ( 46) 00:07:59.719 7461.022 - 7511.434: 6.4022% ( 50) 00:07:59.719 7511.434 - 7561.846: 7.1635% ( 95) 00:07:59.719 7561.846 - 7612.258: 8.0288% ( 108) 00:07:59.719 7612.258 - 7662.671: 8.7580% ( 91) 00:07:59.719 7662.671 - 7713.083: 9.3750% ( 77) 00:07:59.719 7713.083 - 7763.495: 10.0000% ( 78) 00:07:59.719 7763.495 - 7813.908: 10.4006% ( 50) 00:07:59.719 7813.908 - 7864.320: 10.8013% ( 50) 00:07:59.719 7864.320 - 7914.732: 11.0096% ( 26) 00:07:59.719 7914.732 - 7965.145: 11.2821% ( 34) 00:07:59.719 7965.145 - 8015.557: 11.6667% ( 48) 00:07:59.719 8015.557 - 8065.969: 11.9792% ( 39) 00:07:59.719 8065.969 - 8116.382: 12.3237% ( 43) 00:07:59.719 8116.382 - 8166.794: 12.8846% ( 70) 00:07:59.719 8166.794 - 8217.206: 13.3894% ( 63) 00:07:59.719 8217.206 - 8267.618: 13.8061% ( 52) 00:07:59.719 8267.618 - 8318.031: 14.1827% ( 47) 00:07:59.719 8318.031 - 8368.443: 14.3990% ( 27) 00:07:59.719 8368.443 - 8418.855: 14.5913% ( 24) 00:07:59.719 8418.855 - 8469.268: 14.7756% ( 23) 00:07:59.719 8469.268 - 8519.680: 15.1362% ( 45) 00:07:59.719 8519.680 - 8570.092: 15.6090% ( 59) 00:07:59.719 8570.092 - 8620.505: 16.2660% ( 82) 00:07:59.719 8620.505 - 8670.917: 16.6587% ( 49) 00:07:59.719 8670.917 - 8721.329: 17.2035% ( 68) 00:07:59.719 8721.329 - 8771.742: 17.7404% ( 67) 00:07:59.719 8771.742 - 8822.154: 18.6218% ( 110) 00:07:59.719 8822.154 - 8872.566: 19.4311% ( 101) 00:07:59.719 8872.566 - 8922.978: 20.6651% ( 154) 00:07:59.719 8922.978 - 8973.391: 21.7468% ( 135) 00:07:59.719 8973.391 - 9023.803: 22.6843% ( 117) 00:07:59.719 9023.803 - 9074.215: 23.3734% ( 86) 00:07:59.719 9074.215 - 9124.628: 24.0304% ( 82) 00:07:59.719 9124.628 - 9175.040: 24.7676% ( 92) 00:07:59.719 9175.040 - 9225.452: 25.6250% ( 107) 
00:07:59.719 9225.452 - 9275.865: 26.2901% ( 83) 00:07:59.719 9275.865 - 9326.277: 27.0593% ( 96) 00:07:59.719 9326.277 - 9376.689: 27.8686% ( 101) 00:07:59.719 9376.689 - 9427.102: 28.9824% ( 139) 00:07:59.719 9427.102 - 9477.514: 29.8878% ( 113) 00:07:59.719 9477.514 - 9527.926: 30.8574% ( 121) 00:07:59.719 9527.926 - 9578.338: 32.2035% ( 168) 00:07:59.719 9578.338 - 9628.751: 33.5497% ( 168) 00:07:59.719 9628.751 - 9679.163: 34.7516% ( 150) 00:07:59.719 9679.163 - 9729.575: 36.0096% ( 157) 00:07:59.719 9729.575 - 9779.988: 37.4519% ( 180) 00:07:59.719 9779.988 - 9830.400: 38.8462% ( 174) 00:07:59.719 9830.400 - 9880.812: 40.1603% ( 164) 00:07:59.719 9880.812 - 9931.225: 41.5946% ( 179) 00:07:59.719 9931.225 - 9981.637: 42.9407% ( 168) 00:07:59.719 9981.637 - 10032.049: 44.2468% ( 163) 00:07:59.719 10032.049 - 10082.462: 45.4968% ( 156) 00:07:59.719 10082.462 - 10132.874: 46.9071% ( 176) 00:07:59.719 10132.874 - 10183.286: 48.6218% ( 214) 00:07:59.719 10183.286 - 10233.698: 50.5288% ( 238) 00:07:59.719 10233.698 - 10284.111: 52.2676% ( 217) 00:07:59.719 10284.111 - 10334.523: 54.0946% ( 228) 00:07:59.719 10334.523 - 10384.935: 55.7372% ( 205) 00:07:59.719 10384.935 - 10435.348: 57.5561% ( 227) 00:07:59.719 10435.348 - 10485.760: 59.5353% ( 247) 00:07:59.719 10485.760 - 10536.172: 61.7388% ( 275) 00:07:59.719 10536.172 - 10586.585: 63.3413% ( 200) 00:07:59.719 10586.585 - 10636.997: 65.0881% ( 218) 00:07:59.719 10636.997 - 10687.409: 66.7388% ( 206) 00:07:59.719 10687.409 - 10737.822: 68.3013% ( 195) 00:07:59.719 10737.822 - 10788.234: 70.0000% ( 212) 00:07:59.719 10788.234 - 10838.646: 71.5705% ( 196) 00:07:59.719 10838.646 - 10889.058: 73.0849% ( 189) 00:07:59.719 10889.058 - 10939.471: 74.4952% ( 176) 00:07:59.719 10939.471 - 10989.883: 75.8013% ( 163) 00:07:59.719 10989.883 - 11040.295: 76.7708% ( 121) 00:07:59.719 11040.295 - 11090.708: 77.5481% ( 97) 00:07:59.719 11090.708 - 11141.120: 78.3413% ( 99) 00:07:59.719 11141.120 - 11191.532: 79.2147% ( 109) 00:07:59.719 11191.532 - 11241.945: 79.9840% ( 96) 00:07:59.719 11241.945 - 11292.357: 80.8654% ( 110) 00:07:59.719 11292.357 - 11342.769: 81.7869% ( 115) 00:07:59.719 11342.769 - 11393.182: 82.6683% ( 110) 00:07:59.719 11393.182 - 11443.594: 83.4455% ( 97) 00:07:59.719 11443.594 - 11494.006: 84.1346% ( 86) 00:07:59.719 11494.006 - 11544.418: 84.8077% ( 84) 00:07:59.719 11544.418 - 11594.831: 85.5609% ( 94) 00:07:59.719 11594.831 - 11645.243: 86.6506% ( 136) 00:07:59.719 11645.243 - 11695.655: 87.3958% ( 93) 00:07:59.719 11695.655 - 11746.068: 88.0609% ( 83) 00:07:59.719 11746.068 - 11796.480: 88.5657% ( 63) 00:07:59.719 11796.480 - 11846.892: 88.9583% ( 49) 00:07:59.719 11846.892 - 11897.305: 89.3750% ( 52) 00:07:59.719 11897.305 - 11947.717: 89.8237% ( 56) 00:07:59.719 11947.717 - 11998.129: 90.1522% ( 41) 00:07:59.719 11998.129 - 12048.542: 90.4407% ( 36) 00:07:59.719 12048.542 - 12098.954: 90.7051% ( 33) 00:07:59.719 12098.954 - 12149.366: 91.0577% ( 44) 00:07:59.719 12149.366 - 12199.778: 91.2740% ( 27) 00:07:59.719 12199.778 - 12250.191: 91.5224% ( 31) 00:07:59.719 12250.191 - 12300.603: 91.7147% ( 24) 00:07:59.719 12300.603 - 12351.015: 91.8990% ( 23) 00:07:59.719 12351.015 - 12401.428: 92.1955% ( 37) 00:07:59.719 12401.428 - 12451.840: 92.3718% ( 22) 00:07:59.719 12451.840 - 12502.252: 92.5962% ( 28) 00:07:59.719 12502.252 - 12552.665: 92.8526% ( 32) 00:07:59.719 12552.665 - 12603.077: 93.2212% ( 46) 00:07:59.719 12603.077 - 12653.489: 93.5337% ( 39) 00:07:59.719 12653.489 - 12703.902: 93.8622% ( 41) 00:07:59.719 12703.902 - 
12754.314: 94.2708% ( 51) 00:07:59.719 12754.314 - 12804.726: 94.5593% ( 36) 00:07:59.719 12804.726 - 12855.138: 94.8478% ( 36) 00:07:59.719 12855.138 - 12905.551: 95.1042% ( 32) 00:07:59.719 12905.551 - 13006.375: 95.4327% ( 41) 00:07:59.719 13006.375 - 13107.200: 95.6250% ( 24) 00:07:59.719 13107.200 - 13208.025: 95.7692% ( 18) 00:07:59.719 13208.025 - 13308.849: 95.8734% ( 13) 00:07:59.719 13308.849 - 13409.674: 96.0497% ( 22) 00:07:59.719 13409.674 - 13510.498: 96.2099% ( 20) 00:07:59.719 13510.498 - 13611.323: 96.3542% ( 18) 00:07:59.719 13611.323 - 13712.148: 96.5465% ( 24) 00:07:59.719 13712.148 - 13812.972: 96.8029% ( 32) 00:07:59.719 13812.972 - 13913.797: 97.0513% ( 31) 00:07:59.719 13913.797 - 14014.622: 97.2516% ( 25) 00:07:59.719 14014.622 - 14115.446: 97.4279% ( 22) 00:07:59.719 14115.446 - 14216.271: 97.5641% ( 17) 00:07:59.719 14216.271 - 14317.095: 97.6683% ( 13) 00:07:59.719 14317.095 - 14417.920: 97.7644% ( 12) 00:07:59.719 14417.920 - 14518.745: 97.8686% ( 13) 00:07:59.719 14518.745 - 14619.569: 97.9407% ( 9) 00:07:59.719 14619.569 - 14720.394: 98.0369% ( 12) 00:07:59.719 14720.394 - 14821.218: 98.1571% ( 15) 00:07:59.719 14821.218 - 14922.043: 98.3013% ( 18) 00:07:59.719 14922.043 - 15022.868: 98.6218% ( 40) 00:07:59.719 15022.868 - 15123.692: 98.7340% ( 14) 00:07:59.719 15123.692 - 15224.517: 98.8381% ( 13) 00:07:59.719 15224.517 - 15325.342: 98.8702% ( 4) 00:07:59.719 15325.342 - 15426.166: 98.9103% ( 5) 00:07:59.719 15426.166 - 15526.991: 98.9503% ( 5) 00:07:59.719 15526.991 - 15627.815: 98.9744% ( 3) 00:07:59.719 26012.751 - 26214.400: 98.9984% ( 3) 00:07:59.719 26214.400 - 26416.049: 99.0545% ( 7) 00:07:59.719 26416.049 - 26617.698: 99.1106% ( 7) 00:07:59.719 26617.698 - 26819.348: 99.1747% ( 8) 00:07:59.719 26819.348 - 27020.997: 99.2308% ( 7) 00:07:59.719 27020.997 - 27222.646: 99.2869% ( 7) 00:07:59.719 27222.646 - 27424.295: 99.3429% ( 7) 00:07:59.719 27424.295 - 27625.945: 99.3990% ( 7) 00:07:59.719 27625.945 - 27827.594: 99.4551% ( 7) 00:07:59.719 27827.594 - 28029.243: 99.4872% ( 4) 00:07:59.719 31457.280 - 31658.929: 99.5433% ( 7) 00:07:59.719 31658.929 - 31860.578: 99.6074% ( 8) 00:07:59.719 31860.578 - 32062.228: 99.6554% ( 6) 00:07:59.719 32062.228 - 32263.877: 99.7115% ( 7) 00:07:59.719 32263.877 - 32465.526: 99.7756% ( 8) 00:07:59.719 32465.526 - 32667.175: 99.8317% ( 7) 00:07:59.719 32667.175 - 32868.825: 99.8878% ( 7) 00:07:59.719 32868.825 - 33070.474: 99.9439% ( 7) 00:07:59.719 33070.474 - 33272.123: 100.0000% ( 7) 00:07:59.719 00:07:59.719 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.719 ============================================================================== 00:07:59.719 Range in us Cumulative IO count 00:07:59.719 6301.538 - 6326.745: 0.0080% ( 1) 00:07:59.719 6326.745 - 6351.951: 0.0160% ( 1) 00:07:59.720 6402.363 - 6427.569: 0.0240% ( 1) 00:07:59.720 6452.775 - 6503.188: 0.0321% ( 1) 00:07:59.720 6503.188 - 6553.600: 0.0801% ( 6) 00:07:59.720 6553.600 - 6604.012: 0.1603% ( 10) 00:07:59.720 6604.012 - 6654.425: 0.3045% ( 18) 00:07:59.720 6654.425 - 6704.837: 0.6811% ( 47) 00:07:59.720 6704.837 - 6755.249: 0.9135% ( 29) 00:07:59.720 6755.249 - 6805.662: 1.4022% ( 61) 00:07:59.720 6805.662 - 6856.074: 1.6346% ( 29) 00:07:59.720 6856.074 - 6906.486: 2.0112% ( 47) 00:07:59.720 6906.486 - 6956.898: 2.3317% ( 40) 00:07:59.720 6956.898 - 7007.311: 2.7404% ( 51) 00:07:59.720 7007.311 - 7057.723: 3.3333% ( 74) 00:07:59.720 7057.723 - 7108.135: 3.9263% ( 74) 00:07:59.720 7108.135 - 7158.548: 4.6154% ( 86) 00:07:59.720 
7158.548 - 7208.960: 5.2885% ( 84) 00:07:59.720 7208.960 - 7259.372: 5.7372% ( 56) 00:07:59.720 7259.372 - 7309.785: 5.9615% ( 28) 00:07:59.720 7309.785 - 7360.197: 6.1378% ( 22) 00:07:59.720 7360.197 - 7410.609: 6.3061% ( 21) 00:07:59.720 7410.609 - 7461.022: 6.6987% ( 49) 00:07:59.720 7461.022 - 7511.434: 6.8750% ( 22) 00:07:59.720 7511.434 - 7561.846: 7.2115% ( 42) 00:07:59.720 7561.846 - 7612.258: 7.4920% ( 35) 00:07:59.720 7612.258 - 7662.671: 7.8686% ( 47) 00:07:59.720 7662.671 - 7713.083: 8.2692% ( 50) 00:07:59.720 7713.083 - 7763.495: 8.6939% ( 53) 00:07:59.720 7763.495 - 7813.908: 9.1587% ( 58) 00:07:59.720 7813.908 - 7864.320: 9.4712% ( 39) 00:07:59.720 7864.320 - 7914.732: 9.9119% ( 55) 00:07:59.720 7914.732 - 7965.145: 10.3205% ( 51) 00:07:59.720 7965.145 - 8015.557: 10.7292% ( 51) 00:07:59.720 8015.557 - 8065.969: 11.0337% ( 38) 00:07:59.720 8065.969 - 8116.382: 11.4103% ( 47) 00:07:59.720 8116.382 - 8166.794: 12.0593% ( 81) 00:07:59.720 8166.794 - 8217.206: 12.7404% ( 85) 00:07:59.720 8217.206 - 8267.618: 13.4135% ( 84) 00:07:59.720 8267.618 - 8318.031: 14.2228% ( 101) 00:07:59.720 8318.031 - 8368.443: 14.5673% ( 43) 00:07:59.720 8368.443 - 8418.855: 14.9279% ( 45) 00:07:59.720 8418.855 - 8469.268: 15.2804% ( 44) 00:07:59.720 8469.268 - 8519.680: 15.7853% ( 63) 00:07:59.720 8519.680 - 8570.092: 16.2340% ( 56) 00:07:59.720 8570.092 - 8620.505: 16.7708% ( 67) 00:07:59.720 8620.505 - 8670.917: 17.3878% ( 77) 00:07:59.720 8670.917 - 8721.329: 17.9006% ( 64) 00:07:59.720 8721.329 - 8771.742: 18.5497% ( 81) 00:07:59.720 8771.742 - 8822.154: 19.2869% ( 92) 00:07:59.720 8822.154 - 8872.566: 20.2885% ( 125) 00:07:59.720 8872.566 - 8922.978: 20.9776% ( 86) 00:07:59.720 8922.978 - 8973.391: 22.0112% ( 129) 00:07:59.720 8973.391 - 9023.803: 22.7244% ( 89) 00:07:59.720 9023.803 - 9074.215: 23.5096% ( 98) 00:07:59.720 9074.215 - 9124.628: 24.4231% ( 114) 00:07:59.720 9124.628 - 9175.040: 25.2163% ( 99) 00:07:59.720 9175.040 - 9225.452: 25.9615% ( 93) 00:07:59.720 9225.452 - 9275.865: 26.8990% ( 117) 00:07:59.720 9275.865 - 9326.277: 27.6923% ( 99) 00:07:59.720 9326.277 - 9376.689: 28.7660% ( 134) 00:07:59.720 9376.689 - 9427.102: 29.4952% ( 91) 00:07:59.720 9427.102 - 9477.514: 30.4728% ( 122) 00:07:59.720 9477.514 - 9527.926: 31.4263% ( 119) 00:07:59.720 9527.926 - 9578.338: 32.3397% ( 114) 00:07:59.720 9578.338 - 9628.751: 33.3494% ( 126) 00:07:59.720 9628.751 - 9679.163: 34.6554% ( 163) 00:07:59.720 9679.163 - 9729.575: 36.0737% ( 177) 00:07:59.720 9729.575 - 9779.988: 37.3237% ( 156) 00:07:59.720 9779.988 - 9830.400: 38.5577% ( 154) 00:07:59.720 9830.400 - 9880.812: 39.7837% ( 153) 00:07:59.720 9880.812 - 9931.225: 41.0096% ( 153) 00:07:59.720 9931.225 - 9981.637: 42.3237% ( 164) 00:07:59.720 9981.637 - 10032.049: 44.1346% ( 226) 00:07:59.720 10032.049 - 10082.462: 45.4888% ( 169) 00:07:59.720 10082.462 - 10132.874: 46.7949% ( 163) 00:07:59.720 10132.874 - 10183.286: 48.3734% ( 197) 00:07:59.720 10183.286 - 10233.698: 50.1683% ( 224) 00:07:59.720 10233.698 - 10284.111: 51.7869% ( 202) 00:07:59.720 10284.111 - 10334.523: 53.5897% ( 225) 00:07:59.720 10334.523 - 10384.935: 55.7612% ( 271) 00:07:59.720 10384.935 - 10435.348: 57.9808% ( 277) 00:07:59.720 10435.348 - 10485.760: 59.9920% ( 251) 00:07:59.720 10485.760 - 10536.172: 62.1234% ( 266) 00:07:59.720 10536.172 - 10586.585: 63.9183% ( 224) 00:07:59.720 10586.585 - 10636.997: 65.6090% ( 211) 00:07:59.720 10636.997 - 10687.409: 67.2756% ( 208) 00:07:59.720 10687.409 - 10737.822: 69.0224% ( 218) 00:07:59.720 10737.822 - 10788.234: 
70.3045% ( 160) 00:07:59.720 10788.234 - 10838.646: 71.6266% ( 165) 00:07:59.720 10838.646 - 10889.058: 72.9728% ( 168) 00:07:59.720 10889.058 - 10939.471: 74.1426% ( 146) 00:07:59.720 10939.471 - 10989.883: 75.3686% ( 153) 00:07:59.720 10989.883 - 11040.295: 76.4183% ( 131) 00:07:59.720 11040.295 - 11090.708: 77.5401% ( 140) 00:07:59.720 11090.708 - 11141.120: 78.5337% ( 124) 00:07:59.720 11141.120 - 11191.532: 79.9279% ( 174) 00:07:59.720 11191.532 - 11241.945: 81.1619% ( 154) 00:07:59.720 11241.945 - 11292.357: 82.1394% ( 122) 00:07:59.720 11292.357 - 11342.769: 82.8526% ( 89) 00:07:59.720 11342.769 - 11393.182: 83.4215% ( 71) 00:07:59.720 11393.182 - 11443.594: 84.1026% ( 85) 00:07:59.720 11443.594 - 11494.006: 84.6394% ( 67) 00:07:59.720 11494.006 - 11544.418: 85.1202% ( 60) 00:07:59.720 11544.418 - 11594.831: 85.7612% ( 80) 00:07:59.720 11594.831 - 11645.243: 86.4022% ( 80) 00:07:59.720 11645.243 - 11695.655: 87.1554% ( 94) 00:07:59.720 11695.655 - 11746.068: 87.7083% ( 69) 00:07:59.720 11746.068 - 11796.480: 88.1971% ( 61) 00:07:59.720 11796.480 - 11846.892: 88.6538% ( 57) 00:07:59.720 11846.892 - 11897.305: 89.4471% ( 99) 00:07:59.720 11897.305 - 11947.717: 89.9119% ( 58) 00:07:59.720 11947.717 - 11998.129: 90.3846% ( 59) 00:07:59.720 11998.129 - 12048.542: 90.8013% ( 52) 00:07:59.720 12048.542 - 12098.954: 91.2340% ( 54) 00:07:59.720 12098.954 - 12149.366: 91.5385% ( 38) 00:07:59.720 12149.366 - 12199.778: 91.7708% ( 29) 00:07:59.720 12199.778 - 12250.191: 92.0353% ( 33) 00:07:59.720 12250.191 - 12300.603: 92.2756% ( 30) 00:07:59.720 12300.603 - 12351.015: 92.5160% ( 30) 00:07:59.720 12351.015 - 12401.428: 92.7484% ( 29) 00:07:59.720 12401.428 - 12451.840: 93.0208% ( 34) 00:07:59.720 12451.840 - 12502.252: 93.2532% ( 29) 00:07:59.720 12502.252 - 12552.665: 93.5337% ( 35) 00:07:59.720 12552.665 - 12603.077: 93.7981% ( 33) 00:07:59.720 12603.077 - 12653.489: 94.0064% ( 26) 00:07:59.720 12653.489 - 12703.902: 94.1907% ( 23) 00:07:59.720 12703.902 - 12754.314: 94.3670% ( 22) 00:07:59.720 12754.314 - 12804.726: 94.5353% ( 21) 00:07:59.720 12804.726 - 12855.138: 94.6635% ( 16) 00:07:59.720 12855.138 - 12905.551: 94.7676% ( 13) 00:07:59.720 12905.551 - 13006.375: 94.9760% ( 26) 00:07:59.720 13006.375 - 13107.200: 95.2564% ( 35) 00:07:59.720 13107.200 - 13208.025: 95.4647% ( 26) 00:07:59.720 13208.025 - 13308.849: 95.6250% ( 20) 00:07:59.720 13308.849 - 13409.674: 95.8974% ( 34) 00:07:59.720 13409.674 - 13510.498: 96.0817% ( 23) 00:07:59.720 13510.498 - 13611.323: 96.2901% ( 26) 00:07:59.720 13611.323 - 13712.148: 96.5224% ( 29) 00:07:59.720 13712.148 - 13812.972: 96.6426% ( 15) 00:07:59.720 13812.972 - 13913.797: 96.8029% ( 20) 00:07:59.720 13913.797 - 14014.622: 96.9471% ( 18) 00:07:59.720 14014.622 - 14115.446: 97.1554% ( 26) 00:07:59.720 14115.446 - 14216.271: 97.3317% ( 22) 00:07:59.720 14216.271 - 14317.095: 97.6042% ( 34) 00:07:59.720 14317.095 - 14417.920: 97.7404% ( 17) 00:07:59.720 14417.920 - 14518.745: 97.8606% ( 15) 00:07:59.720 14518.745 - 14619.569: 97.9247% ( 8) 00:07:59.720 14619.569 - 14720.394: 98.0048% ( 10) 00:07:59.720 14720.394 - 14821.218: 98.0769% ( 9) 00:07:59.720 14821.218 - 14922.043: 98.1731% ( 12) 00:07:59.720 14922.043 - 15022.868: 98.2853% ( 14) 00:07:59.720 15022.868 - 15123.692: 98.3814% ( 12) 00:07:59.720 15123.692 - 15224.517: 98.5737% ( 24) 00:07:59.720 15224.517 - 15325.342: 98.6538% ( 10) 00:07:59.720 15325.342 - 15426.166: 98.7340% ( 10) 00:07:59.720 15426.166 - 15526.991: 98.8141% ( 10) 00:07:59.720 15526.991 - 15627.815: 98.8862% ( 9) 00:07:59.720 
15627.815 - 15728.640: 98.8942% ( 1) 00:07:59.720 15930.289 - 16031.114: 98.9663% ( 9) 00:07:59.720 16031.114 - 16131.938: 98.9744% ( 1) 00:07:59.720 24298.732 - 24399.557: 98.9904% ( 2) 00:07:59.720 24399.557 - 24500.382: 99.0064% ( 2) 00:07:59.720 24500.382 - 24601.206: 99.0304% ( 3) 00:07:59.720 24601.206 - 24702.031: 99.0545% ( 3) 00:07:59.720 24702.031 - 24802.855: 99.0705% ( 2) 00:07:59.720 24802.855 - 24903.680: 99.0946% ( 3) 00:07:59.720 24903.680 - 25004.505: 99.1186% ( 3) 00:07:59.720 25004.505 - 25105.329: 99.1346% ( 2) 00:07:59.720 25105.329 - 25206.154: 99.1587% ( 3) 00:07:59.720 25206.154 - 25306.978: 99.1827% ( 3) 00:07:59.720 25306.978 - 25407.803: 99.1987% ( 2) 00:07:59.720 25407.803 - 25508.628: 99.2228% ( 3) 00:07:59.720 25508.628 - 25609.452: 99.2468% ( 3) 00:07:59.720 25609.452 - 25710.277: 99.2788% ( 4) 00:07:59.720 25710.277 - 25811.102: 99.3109% ( 4) 00:07:59.720 25811.102 - 26012.751: 99.3670% ( 7) 00:07:59.720 26012.751 - 26214.400: 99.4231% ( 7) 00:07:59.720 26214.400 - 26416.049: 99.4792% ( 7) 00:07:59.720 26416.049 - 26617.698: 99.4872% ( 1) 00:07:59.720 29844.086 - 30045.735: 99.5112% ( 3) 00:07:59.720 30045.735 - 30247.385: 99.5513% ( 5) 00:07:59.720 30247.385 - 30449.034: 99.5913% ( 5) 00:07:59.720 30449.034 - 30650.683: 99.6474% ( 7) 00:07:59.720 30650.683 - 30852.332: 99.7035% ( 7) 00:07:59.720 30852.332 - 31053.982: 99.7516% ( 6) 00:07:59.720 31053.982 - 31255.631: 99.8157% ( 8) 00:07:59.720 31255.631 - 31457.280: 99.8558% ( 5) 00:07:59.720 31457.280 - 31658.929: 99.9199% ( 8) 00:07:59.720 31658.929 - 31860.578: 99.9760% ( 7) 00:07:59.720 31860.578 - 32062.228: 100.0000% ( 3) 00:07:59.720 00:07:59.720 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.720 ============================================================================== 00:07:59.720 Range in us Cumulative IO count 00:07:59.720 6276.332 - 6301.538: 0.0080% ( 1) 00:07:59.720 6427.569 - 6452.775: 0.0240% ( 2) 00:07:59.720 6452.775 - 6503.188: 0.0721% ( 6) 00:07:59.720 6503.188 - 6553.600: 0.1442% ( 9) 00:07:59.720 6553.600 - 6604.012: 0.2564% ( 14) 00:07:59.720 6604.012 - 6654.425: 0.5048% ( 31) 00:07:59.720 6654.425 - 6704.837: 0.6811% ( 22) 00:07:59.720 6704.837 - 6755.249: 0.8974% ( 27) 00:07:59.720 6755.249 - 6805.662: 1.3061% ( 51) 00:07:59.720 6805.662 - 6856.074: 1.4984% ( 24) 00:07:59.720 6856.074 - 6906.486: 1.7949% ( 37) 00:07:59.720 6906.486 - 6956.898: 2.0513% ( 32) 00:07:59.720 6956.898 - 7007.311: 2.6522% ( 75) 00:07:59.720 7007.311 - 7057.723: 3.2772% ( 78) 00:07:59.720 7057.723 - 7108.135: 3.9904% ( 89) 00:07:59.720 7108.135 - 7158.548: 4.6394% ( 81) 00:07:59.720 7158.548 - 7208.960: 5.1202% ( 60) 00:07:59.720 7208.960 - 7259.372: 5.3766% ( 32) 00:07:59.720 7259.372 - 7309.785: 5.5208% ( 18) 00:07:59.720 7309.785 - 7360.197: 5.6250% ( 13) 00:07:59.720 7360.197 - 7410.609: 5.8253% ( 25) 00:07:59.720 7410.609 - 7461.022: 6.0256% ( 25) 00:07:59.720 7461.022 - 7511.434: 6.2660% ( 30) 00:07:59.720 7511.434 - 7561.846: 6.9872% ( 90) 00:07:59.720 7561.846 - 7612.258: 7.4038% ( 52) 00:07:59.720 7612.258 - 7662.671: 7.9567% ( 69) 00:07:59.720 7662.671 - 7713.083: 8.7260% ( 96) 00:07:59.720 7713.083 - 7763.495: 9.3990% ( 84) 00:07:59.720 7763.495 - 7813.908: 9.9439% ( 68) 00:07:59.720 7813.908 - 7864.320: 10.3125% ( 46) 00:07:59.720 7864.320 - 7914.732: 10.9135% ( 75) 00:07:59.720 7914.732 - 7965.145: 11.4103% ( 62) 00:07:59.720 7965.145 - 8015.557: 11.6106% ( 25) 00:07:59.720 8015.557 - 8065.969: 11.8189% ( 26) 00:07:59.720 8065.969 - 8116.382: 12.0593% ( 30) 
00:07:59.720 8116.382 - 8166.794: 12.5000% ( 55) 00:07:59.720 8166.794 - 8217.206: 12.9567% ( 57) 00:07:59.720 8217.206 - 8267.618: 13.2532% ( 37) 00:07:59.720 8267.618 - 8318.031: 13.6298% ( 47) 00:07:59.720 8318.031 - 8368.443: 13.9904% ( 45) 00:07:59.720 8368.443 - 8418.855: 14.2548% ( 33) 00:07:59.720 8418.855 - 8469.268: 14.8157% ( 70) 00:07:59.720 8469.268 - 8519.680: 15.2083% ( 49) 00:07:59.720 8519.680 - 8570.092: 15.6410% ( 54) 00:07:59.720 8570.092 - 8620.505: 16.0897% ( 56) 00:07:59.720 8620.505 - 8670.917: 16.7708% ( 85) 00:07:59.720 8670.917 - 8721.329: 17.7003% ( 116) 00:07:59.720 8721.329 - 8771.742: 18.4135% ( 89) 00:07:59.720 8771.742 - 8822.154: 19.3269% ( 114) 00:07:59.720 8822.154 - 8872.566: 20.6651% ( 167) 00:07:59.720 8872.566 - 8922.978: 21.5865% ( 115) 00:07:59.720 8922.978 - 8973.391: 22.4038% ( 102) 00:07:59.720 8973.391 - 9023.803: 23.1250% ( 90) 00:07:59.720 9023.803 - 9074.215: 23.8381% ( 89) 00:07:59.720 9074.215 - 9124.628: 24.5272% ( 86) 00:07:59.720 9124.628 - 9175.040: 25.1522% ( 78) 00:07:59.720 9175.040 - 9225.452: 25.6971% ( 68) 00:07:59.720 9225.452 - 9275.865: 26.2901% ( 74) 00:07:59.720 9275.865 - 9326.277: 26.9071% ( 77) 00:07:59.720 9326.277 - 9376.689: 27.7724% ( 108) 00:07:59.720 9376.689 - 9427.102: 28.6138% ( 105) 00:07:59.720 9427.102 - 9477.514: 29.6074% ( 124) 00:07:59.720 9477.514 - 9527.926: 30.6731% ( 133) 00:07:59.720 9527.926 - 9578.338: 31.7468% ( 134) 00:07:59.720 9578.338 - 9628.751: 32.7564% ( 126) 00:07:59.720 9628.751 - 9679.163: 33.9343% ( 147) 00:07:59.720 9679.163 - 9729.575: 35.1603% ( 153) 00:07:59.720 9729.575 - 9779.988: 36.4663% ( 163) 00:07:59.720 9779.988 - 9830.400: 37.9087% ( 180) 00:07:59.720 9830.400 - 9880.812: 39.4071% ( 187) 00:07:59.720 9880.812 - 9931.225: 40.7933% ( 173) 00:07:59.720 9931.225 - 9981.637: 42.4760% ( 210) 00:07:59.720 9981.637 - 10032.049: 44.2628% ( 223) 00:07:59.720 10032.049 - 10082.462: 45.9776% ( 214) 00:07:59.720 10082.462 - 10132.874: 47.3638% ( 173) 00:07:59.720 10132.874 - 10183.286: 48.9103% ( 193) 00:07:59.720 10183.286 - 10233.698: 50.6250% ( 214) 00:07:59.720 10233.698 - 10284.111: 52.7003% ( 259) 00:07:59.720 10284.111 - 10334.523: 54.4151% ( 214) 00:07:59.720 10334.523 - 10384.935: 56.4022% ( 248) 00:07:59.720 10384.935 - 10435.348: 58.1330% ( 216) 00:07:59.721 10435.348 - 10485.760: 60.1042% ( 246) 00:07:59.721 10485.760 - 10536.172: 61.9551% ( 231) 00:07:59.721 10536.172 - 10586.585: 63.5176% ( 195) 00:07:59.721 10586.585 - 10636.997: 64.9760% ( 182) 00:07:59.721 10636.997 - 10687.409: 66.6747% ( 212) 00:07:59.721 10687.409 - 10737.822: 68.0689% ( 174) 00:07:59.721 10737.822 - 10788.234: 69.6074% ( 192) 00:07:59.721 10788.234 - 10838.646: 71.3542% ( 218) 00:07:59.721 10838.646 - 10889.058: 72.8606% ( 188) 00:07:59.721 10889.058 - 10939.471: 74.1346% ( 159) 00:07:59.721 10939.471 - 10989.883: 75.2804% ( 143) 00:07:59.721 10989.883 - 11040.295: 76.6587% ( 172) 00:07:59.721 11040.295 - 11090.708: 77.5721% ( 114) 00:07:59.721 11090.708 - 11141.120: 78.5737% ( 125) 00:07:59.721 11141.120 - 11191.532: 79.3510% ( 97) 00:07:59.721 11191.532 - 11241.945: 80.2083% ( 107) 00:07:59.721 11241.945 - 11292.357: 81.2099% ( 125) 00:07:59.721 11292.357 - 11342.769: 82.0513% ( 105) 00:07:59.721 11342.769 - 11393.182: 82.7083% ( 82) 00:07:59.721 11393.182 - 11443.594: 84.0946% ( 173) 00:07:59.721 11443.594 - 11494.006: 85.2644% ( 146) 00:07:59.721 11494.006 - 11544.418: 85.9856% ( 90) 00:07:59.721 11544.418 - 11594.831: 86.6907% ( 88) 00:07:59.721 11594.831 - 11645.243: 87.2276% ( 67) 00:07:59.721 
11645.243 - 11695.655: 87.7564% ( 66) 00:07:59.721 11695.655 - 11746.068: 88.3173% ( 70) 00:07:59.721 11746.068 - 11796.480: 88.7580% ( 55) 00:07:59.721 11796.480 - 11846.892: 89.2628% ( 63) 00:07:59.721 11846.892 - 11897.305: 89.6875% ( 53) 00:07:59.721 11897.305 - 11947.717: 90.2644% ( 72) 00:07:59.721 11947.717 - 11998.129: 90.6891% ( 53) 00:07:59.721 11998.129 - 12048.542: 91.1859% ( 62) 00:07:59.721 12048.542 - 12098.954: 91.6106% ( 53) 00:07:59.721 12098.954 - 12149.366: 92.1234% ( 64) 00:07:59.721 12149.366 - 12199.778: 92.5401% ( 52) 00:07:59.721 12199.778 - 12250.191: 92.8926% ( 44) 00:07:59.721 12250.191 - 12300.603: 93.2131% ( 40) 00:07:59.721 12300.603 - 12351.015: 93.5176% ( 38) 00:07:59.721 12351.015 - 12401.428: 93.8301% ( 39) 00:07:59.721 12401.428 - 12451.840: 94.0545% ( 28) 00:07:59.721 12451.840 - 12502.252: 94.2228% ( 21) 00:07:59.721 12502.252 - 12552.665: 94.4391% ( 27) 00:07:59.721 12552.665 - 12603.077: 94.5433% ( 13) 00:07:59.721 12603.077 - 12653.489: 94.6394% ( 12) 00:07:59.721 12653.489 - 12703.902: 94.7035% ( 8) 00:07:59.721 12703.902 - 12754.314: 94.7436% ( 5) 00:07:59.721 12754.314 - 12804.726: 94.7837% ( 5) 00:07:59.721 12804.726 - 12855.138: 94.8237% ( 5) 00:07:59.721 12855.138 - 12905.551: 94.9038% ( 10) 00:07:59.721 12905.551 - 13006.375: 95.0240% ( 15) 00:07:59.721 13006.375 - 13107.200: 95.1042% ( 10) 00:07:59.721 13107.200 - 13208.025: 95.1683% ( 8) 00:07:59.721 13208.025 - 13308.849: 95.2484% ( 10) 00:07:59.721 13308.849 - 13409.674: 95.3766% ( 16) 00:07:59.721 13409.674 - 13510.498: 95.6010% ( 28) 00:07:59.721 13510.498 - 13611.323: 95.9135% ( 39) 00:07:59.721 13611.323 - 13712.148: 96.1779% ( 33) 00:07:59.721 13712.148 - 13812.972: 96.3862% ( 26) 00:07:59.721 13812.972 - 13913.797: 96.5785% ( 24) 00:07:59.721 13913.797 - 14014.622: 96.7949% ( 27) 00:07:59.721 14014.622 - 14115.446: 97.0112% ( 27) 00:07:59.721 14115.446 - 14216.271: 97.1554% ( 18) 00:07:59.721 14216.271 - 14317.095: 97.3317% ( 22) 00:07:59.721 14317.095 - 14417.920: 97.5481% ( 27) 00:07:59.721 14417.920 - 14518.745: 97.7404% ( 24) 00:07:59.721 14518.745 - 14619.569: 97.8846% ( 18) 00:07:59.721 14619.569 - 14720.394: 98.0128% ( 16) 00:07:59.721 14720.394 - 14821.218: 98.1090% ( 12) 00:07:59.721 14821.218 - 14922.043: 98.1971% ( 11) 00:07:59.721 14922.043 - 15022.868: 98.2772% ( 10) 00:07:59.721 15022.868 - 15123.692: 98.3253% ( 6) 00:07:59.721 15123.692 - 15224.517: 98.3654% ( 5) 00:07:59.721 15224.517 - 15325.342: 98.4615% ( 12) 00:07:59.721 15325.342 - 15426.166: 98.5897% ( 16) 00:07:59.721 15426.166 - 15526.991: 98.6298% ( 5) 00:07:59.721 15526.991 - 15627.815: 98.6779% ( 6) 00:07:59.721 15627.815 - 15728.640: 98.7340% ( 7) 00:07:59.721 15728.640 - 15829.465: 98.7821% ( 6) 00:07:59.721 15829.465 - 15930.289: 98.8301% ( 6) 00:07:59.721 15930.289 - 16031.114: 98.8782% ( 6) 00:07:59.721 16031.114 - 16131.938: 98.9343% ( 7) 00:07:59.721 16131.938 - 16232.763: 98.9744% ( 5) 00:07:59.721 22584.714 - 22685.538: 98.9824% ( 1) 00:07:59.721 22685.538 - 22786.363: 99.0064% ( 3) 00:07:59.721 22786.363 - 22887.188: 99.0385% ( 4) 00:07:59.721 22887.188 - 22988.012: 99.0625% ( 3) 00:07:59.721 22988.012 - 23088.837: 99.0946% ( 4) 00:07:59.721 23088.837 - 23189.662: 99.1266% ( 4) 00:07:59.721 23189.662 - 23290.486: 99.1587% ( 4) 00:07:59.721 23290.486 - 23391.311: 99.1827% ( 3) 00:07:59.721 23391.311 - 23492.135: 99.2147% ( 4) 00:07:59.721 23492.135 - 23592.960: 99.2468% ( 4) 00:07:59.721 23592.960 - 23693.785: 99.2788% ( 4) 00:07:59.721 23693.785 - 23794.609: 99.3029% ( 3) 00:07:59.721 23794.609 - 
23895.434: 99.3349% ( 4) 00:07:59.721 23895.434 - 23996.258: 99.3670% ( 4) 00:07:59.721 23996.258 - 24097.083: 99.3910% ( 3) 00:07:59.721 24097.083 - 24197.908: 99.4231% ( 4) 00:07:59.721 24197.908 - 24298.732: 99.4551% ( 4) 00:07:59.721 24298.732 - 24399.557: 99.4872% ( 4) 00:07:59.721 28029.243 - 28230.892: 99.5353% ( 6) 00:07:59.721 28230.892 - 28432.542: 99.5913% ( 7) 00:07:59.721 28432.542 - 28634.191: 99.6394% ( 6) 00:07:59.721 28634.191 - 28835.840: 99.7035% ( 8) 00:07:59.721 28835.840 - 29037.489: 99.7596% ( 7) 00:07:59.721 29037.489 - 29239.138: 99.8237% ( 8) 00:07:59.721 29239.138 - 29440.788: 99.8798% ( 7) 00:07:59.721 29440.788 - 29642.437: 99.9439% ( 8) 00:07:59.721 29642.437 - 29844.086: 100.0000% ( 7) 00:07:59.721 00:07:59.721 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.721 ============================================================================== 00:07:59.721 Range in us Cumulative IO count 00:07:59.721 6402.363 - 6427.569: 0.0080% ( 1) 00:07:59.721 6452.775 - 6503.188: 0.0240% ( 2) 00:07:59.721 6503.188 - 6553.600: 0.0881% ( 8) 00:07:59.721 6553.600 - 6604.012: 0.1923% ( 13) 00:07:59.721 6604.012 - 6654.425: 0.3926% ( 25) 00:07:59.721 6654.425 - 6704.837: 0.6971% ( 38) 00:07:59.721 6704.837 - 6755.249: 1.1458% ( 56) 00:07:59.721 6755.249 - 6805.662: 1.3301% ( 23) 00:07:59.721 6805.662 - 6856.074: 1.5946% ( 33) 00:07:59.721 6856.074 - 6906.486: 2.1234% ( 66) 00:07:59.721 6906.486 - 6956.898: 2.4199% ( 37) 00:07:59.721 6956.898 - 7007.311: 3.0529% ( 79) 00:07:59.721 7007.311 - 7057.723: 3.6058% ( 69) 00:07:59.721 7057.723 - 7108.135: 4.0865% ( 60) 00:07:59.721 7108.135 - 7158.548: 4.4231% ( 42) 00:07:59.721 7158.548 - 7208.960: 4.9359% ( 64) 00:07:59.721 7208.960 - 7259.372: 5.2885% ( 44) 00:07:59.721 7259.372 - 7309.785: 5.5288% ( 30) 00:07:59.721 7309.785 - 7360.197: 5.7532% ( 28) 00:07:59.721 7360.197 - 7410.609: 6.0817% ( 41) 00:07:59.721 7410.609 - 7461.022: 6.3061% ( 28) 00:07:59.721 7461.022 - 7511.434: 6.5545% ( 31) 00:07:59.721 7511.434 - 7561.846: 6.9551% ( 50) 00:07:59.721 7561.846 - 7612.258: 7.6202% ( 83) 00:07:59.721 7612.258 - 7662.671: 8.1971% ( 72) 00:07:59.721 7662.671 - 7713.083: 8.7580% ( 70) 00:07:59.721 7713.083 - 7763.495: 9.3750% ( 77) 00:07:59.721 7763.495 - 7813.908: 10.3285% ( 119) 00:07:59.721 7813.908 - 7864.320: 10.7853% ( 57) 00:07:59.721 7864.320 - 7914.732: 11.0978% ( 39) 00:07:59.721 7914.732 - 7965.145: 11.4744% ( 47) 00:07:59.721 7965.145 - 8015.557: 11.6667% ( 24) 00:07:59.721 8015.557 - 8065.969: 11.8510% ( 23) 00:07:59.721 8065.969 - 8116.382: 12.1955% ( 43) 00:07:59.721 8116.382 - 8166.794: 12.4679% ( 34) 00:07:59.721 8166.794 - 8217.206: 12.7885% ( 40) 00:07:59.721 8217.206 - 8267.618: 13.1891% ( 50) 00:07:59.721 8267.618 - 8318.031: 13.6218% ( 54) 00:07:59.721 8318.031 - 8368.443: 13.8301% ( 26) 00:07:59.721 8368.443 - 8418.855: 13.9984% ( 21) 00:07:59.721 8418.855 - 8469.268: 14.2468% ( 31) 00:07:59.721 8469.268 - 8519.680: 14.4792% ( 29) 00:07:59.721 8519.680 - 8570.092: 14.7516% ( 34) 00:07:59.721 8570.092 - 8620.505: 15.2564% ( 63) 00:07:59.721 8620.505 - 8670.917: 15.8574% ( 75) 00:07:59.721 8670.917 - 8721.329: 16.6747% ( 102) 00:07:59.721 8721.329 - 8771.742: 17.8446% ( 146) 00:07:59.721 8771.742 - 8822.154: 18.7260% ( 110) 00:07:59.721 8822.154 - 8872.566: 19.7115% ( 123) 00:07:59.721 8872.566 - 8922.978: 20.4407% ( 91) 00:07:59.721 8922.978 - 8973.391: 21.3141% ( 109) 00:07:59.721 8973.391 - 9023.803: 22.0272% ( 89) 00:07:59.721 9023.803 - 9074.215: 23.0048% ( 122) 00:07:59.721 9074.215 - 
9124.628: 24.1667% ( 145) 00:07:59.721 9124.628 - 9175.040: 24.9760% ( 101) 00:07:59.721 9175.040 - 9225.452: 25.6170% ( 80) 00:07:59.721 9225.452 - 9275.865: 26.4423% ( 103) 00:07:59.721 9275.865 - 9326.277: 27.2356% ( 99) 00:07:59.721 9326.277 - 9376.689: 28.0689% ( 104) 00:07:59.721 9376.689 - 9427.102: 29.0465% ( 122) 00:07:59.721 9427.102 - 9477.514: 30.0641% ( 127) 00:07:59.721 9477.514 - 9527.926: 31.5625% ( 187) 00:07:59.721 9527.926 - 9578.338: 32.6522% ( 136) 00:07:59.721 9578.338 - 9628.751: 33.7580% ( 138) 00:07:59.721 9628.751 - 9679.163: 34.7436% ( 123) 00:07:59.721 9679.163 - 9729.575: 35.8173% ( 134) 00:07:59.721 9729.575 - 9779.988: 36.8910% ( 134) 00:07:59.721 9779.988 - 9830.400: 38.1250% ( 154) 00:07:59.721 9830.400 - 9880.812: 39.2308% ( 138) 00:07:59.721 9880.812 - 9931.225: 40.2724% ( 130) 00:07:59.721 9931.225 - 9981.637: 41.4904% ( 152) 00:07:59.721 9981.637 - 10032.049: 42.9487% ( 182) 00:07:59.721 10032.049 - 10082.462: 44.2949% ( 168) 00:07:59.721 10082.462 - 10132.874: 45.9696% ( 209) 00:07:59.721 10132.874 - 10183.286: 47.4840% ( 189) 00:07:59.721 10183.286 - 10233.698: 48.7260% ( 155) 00:07:59.721 10233.698 - 10284.111: 50.2484% ( 190) 00:07:59.721 10284.111 - 10334.523: 52.4439% ( 274) 00:07:59.721 10334.523 - 10384.935: 54.4151% ( 246) 00:07:59.721 10384.935 - 10435.348: 56.5224% ( 263) 00:07:59.721 10435.348 - 10485.760: 58.3894% ( 233) 00:07:59.721 10485.760 - 10536.172: 60.6731% ( 285) 00:07:59.721 10536.172 - 10586.585: 63.1571% ( 310) 00:07:59.721 10586.585 - 10636.997: 65.3606% ( 275) 00:07:59.721 10636.997 - 10687.409: 67.1394% ( 222) 00:07:59.721 10687.409 - 10737.822: 69.1106% ( 246) 00:07:59.721 10737.822 - 10788.234: 70.9135% ( 225) 00:07:59.721 10788.234 - 10838.646: 72.4519% ( 192) 00:07:59.721 10838.646 - 10889.058: 73.7019% ( 156) 00:07:59.721 10889.058 - 10939.471: 74.7997% ( 137) 00:07:59.721 10939.471 - 10989.883: 75.9135% ( 139) 00:07:59.721 10989.883 - 11040.295: 76.7628% ( 106) 00:07:59.721 11040.295 - 11090.708: 77.5801% ( 102) 00:07:59.721 11090.708 - 11141.120: 78.3173% ( 92) 00:07:59.721 11141.120 - 11191.532: 79.2869% ( 121) 00:07:59.721 11191.532 - 11241.945: 80.2804% ( 124) 00:07:59.721 11241.945 - 11292.357: 81.1779% ( 112) 00:07:59.721 11292.357 - 11342.769: 82.2837% ( 138) 00:07:59.721 11342.769 - 11393.182: 83.5417% ( 157) 00:07:59.721 11393.182 - 11443.594: 84.3990% ( 107) 00:07:59.721 11443.594 - 11494.006: 85.3766% ( 122) 00:07:59.721 11494.006 - 11544.418: 86.7788% ( 175) 00:07:59.721 11544.418 - 11594.831: 87.5080% ( 91) 00:07:59.721 11594.831 - 11645.243: 88.0689% ( 70) 00:07:59.721 11645.243 - 11695.655: 88.5978% ( 66) 00:07:59.721 11695.655 - 11746.068: 89.2548% ( 82) 00:07:59.721 11746.068 - 11796.480: 89.8237% ( 71) 00:07:59.721 11796.480 - 11846.892: 90.2644% ( 55) 00:07:59.721 11846.892 - 11897.305: 90.6891% ( 53) 00:07:59.721 11897.305 - 11947.717: 91.3141% ( 78) 00:07:59.721 11947.717 - 11998.129: 91.6506% ( 42) 00:07:59.721 11998.129 - 12048.542: 91.9551% ( 38) 00:07:59.721 12048.542 - 12098.954: 92.1635% ( 26) 00:07:59.721 12098.954 - 12149.366: 92.3237% ( 20) 00:07:59.721 12149.366 - 12199.778: 92.5160% ( 24) 00:07:59.721 12199.778 - 12250.191: 92.6763% ( 20) 00:07:59.721 12250.191 - 12300.603: 92.8446% ( 21) 00:07:59.721 12300.603 - 12351.015: 93.0529% ( 26) 00:07:59.721 12351.015 - 12401.428: 93.2372% ( 23) 00:07:59.721 12401.428 - 12451.840: 93.4295% ( 24) 00:07:59.721 12451.840 - 12502.252: 93.6058% ( 22) 00:07:59.721 12502.252 - 12552.665: 93.7660% ( 20) 00:07:59.722 12552.665 - 12603.077: 93.9423% ( 22) 
00:07:59.722 12603.077 - 12653.489: 94.1186% ( 22) 00:07:59.722 12653.489 - 12703.902: 94.2308% ( 14) 00:07:59.722 12703.902 - 12754.314: 94.4551% ( 28) 00:07:59.722 12754.314 - 12804.726: 94.6314% ( 22) 00:07:59.722 12804.726 - 12855.138: 94.7196% ( 11) 00:07:59.722 12855.138 - 12905.551: 94.8638% ( 18) 00:07:59.722 12905.551 - 13006.375: 95.2644% ( 50) 00:07:59.722 13006.375 - 13107.200: 95.5529% ( 36) 00:07:59.722 13107.200 - 13208.025: 95.7292% ( 22) 00:07:59.722 13208.025 - 13308.849: 95.8654% ( 17) 00:07:59.722 13308.849 - 13409.674: 95.9615% ( 12) 00:07:59.722 13409.674 - 13510.498: 96.0577% ( 12) 00:07:59.722 13510.498 - 13611.323: 96.1939% ( 17) 00:07:59.722 13611.323 - 13712.148: 96.3862% ( 24) 00:07:59.722 13712.148 - 13812.972: 96.5465% ( 20) 00:07:59.722 13812.972 - 13913.797: 96.7388% ( 24) 00:07:59.722 13913.797 - 14014.622: 96.8189% ( 10) 00:07:59.722 14014.622 - 14115.446: 96.9952% ( 22) 00:07:59.722 14115.446 - 14216.271: 97.1955% ( 25) 00:07:59.722 14216.271 - 14317.095: 97.3157% ( 15) 00:07:59.722 14317.095 - 14417.920: 97.4119% ( 12) 00:07:59.722 14417.920 - 14518.745: 97.5721% ( 20) 00:07:59.722 14518.745 - 14619.569: 97.6683% ( 12) 00:07:59.722 14619.569 - 14720.394: 97.8045% ( 17) 00:07:59.722 14720.394 - 14821.218: 97.9087% ( 13) 00:07:59.722 14821.218 - 14922.043: 98.0128% ( 13) 00:07:59.722 14922.043 - 15022.868: 98.1010% ( 11) 00:07:59.722 15022.868 - 15123.692: 98.2051% ( 13) 00:07:59.722 15123.692 - 15224.517: 98.2853% ( 10) 00:07:59.722 15224.517 - 15325.342: 98.3654% ( 10) 00:07:59.722 15325.342 - 15426.166: 98.5337% ( 21) 00:07:59.722 15426.166 - 15526.991: 98.6298% ( 12) 00:07:59.722 15526.991 - 15627.815: 98.7179% ( 11) 00:07:59.722 15627.815 - 15728.640: 98.8141% ( 12) 00:07:59.722 15728.640 - 15829.465: 98.8702% ( 7) 00:07:59.722 15829.465 - 15930.289: 98.9183% ( 6) 00:07:59.722 15930.289 - 16031.114: 98.9583% ( 5) 00:07:59.722 16031.114 - 16131.938: 98.9744% ( 2) 00:07:59.722 20568.222 - 20669.046: 98.9824% ( 1) 00:07:59.722 20669.046 - 20769.871: 99.0144% ( 4) 00:07:59.722 20769.871 - 20870.695: 99.0385% ( 3) 00:07:59.722 20870.695 - 20971.520: 99.0705% ( 4) 00:07:59.722 20971.520 - 21072.345: 99.0946% ( 3) 00:07:59.722 21072.345 - 21173.169: 99.1266% ( 4) 00:07:59.722 21173.169 - 21273.994: 99.1506% ( 3) 00:07:59.722 21273.994 - 21374.818: 99.1827% ( 4) 00:07:59.722 21374.818 - 21475.643: 99.2147% ( 4) 00:07:59.722 21475.643 - 21576.468: 99.2388% ( 3) 00:07:59.722 21576.468 - 21677.292: 99.2708% ( 4) 00:07:59.722 21677.292 - 21778.117: 99.2949% ( 3) 00:07:59.722 21778.117 - 21878.942: 99.3269% ( 4) 00:07:59.722 21878.942 - 21979.766: 99.3590% ( 4) 00:07:59.722 21979.766 - 22080.591: 99.3830% ( 3) 00:07:59.722 22080.591 - 22181.415: 99.4151% ( 4) 00:07:59.722 22181.415 - 22282.240: 99.4391% ( 3) 00:07:59.722 22282.240 - 22383.065: 99.4631% ( 3) 00:07:59.722 22383.065 - 22483.889: 99.4872% ( 3) 00:07:59.722 26012.751 - 26214.400: 99.5353% ( 6) 00:07:59.722 26214.400 - 26416.049: 99.5913% ( 7) 00:07:59.722 26416.049 - 26617.698: 99.6474% ( 7) 00:07:59.722 26617.698 - 26819.348: 99.7115% ( 8) 00:07:59.722 26819.348 - 27020.997: 99.7676% ( 7) 00:07:59.722 27020.997 - 27222.646: 99.8237% ( 7) 00:07:59.722 27222.646 - 27424.295: 99.8798% ( 7) 00:07:59.722 27424.295 - 27625.945: 99.9359% ( 7) 00:07:59.722 27625.945 - 27827.594: 99.9920% ( 7) 00:07:59.722 27827.594 - 28029.243: 100.0000% ( 1) 00:07:59.722 00:07:59.722 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.722 
============================================================================== 00:07:59.722 Range in us Cumulative IO count 00:07:59.722 6503.188 - 6553.600: 0.0638% ( 8) 00:07:59.722 6553.600 - 6604.012: 0.1594% ( 12) 00:07:59.722 6604.012 - 6654.425: 0.2710% ( 14) 00:07:59.722 6654.425 - 6704.837: 0.4066% ( 17) 00:07:59.722 6704.837 - 6755.249: 1.1798% ( 97) 00:07:59.722 6755.249 - 6805.662: 1.3473% ( 21) 00:07:59.722 6805.662 - 6856.074: 1.5226% ( 22) 00:07:59.722 6856.074 - 6906.486: 1.7618% ( 30) 00:07:59.722 6906.486 - 6956.898: 2.2242% ( 58) 00:07:59.722 6956.898 - 7007.311: 2.7503% ( 66) 00:07:59.722 7007.311 - 7057.723: 3.3482% ( 75) 00:07:59.722 7057.723 - 7108.135: 3.6751% ( 41) 00:07:59.722 7108.135 - 7158.548: 4.1295% ( 57) 00:07:59.722 7158.548 - 7208.960: 4.5679% ( 55) 00:07:59.722 7208.960 - 7259.372: 4.7353% ( 21) 00:07:59.722 7259.372 - 7309.785: 5.2376% ( 63) 00:07:59.722 7309.785 - 7360.197: 5.5086% ( 34) 00:07:59.722 7360.197 - 7410.609: 5.7637% ( 32) 00:07:59.722 7410.609 - 7461.022: 6.0188% ( 32) 00:07:59.722 7461.022 - 7511.434: 6.2899% ( 34) 00:07:59.722 7511.434 - 7561.846: 6.6725% ( 48) 00:07:59.722 7561.846 - 7612.258: 7.7647% ( 137) 00:07:59.722 7612.258 - 7662.671: 8.6894% ( 116) 00:07:59.722 7662.671 - 7713.083: 9.0721% ( 48) 00:07:59.722 7713.083 - 7763.495: 9.8374% ( 96) 00:07:59.722 7763.495 - 7813.908: 10.5230% ( 86) 00:07:59.722 7813.908 - 7864.320: 10.9774% ( 57) 00:07:59.722 7864.320 - 7914.732: 11.1607% ( 23) 00:07:59.722 7914.732 - 7965.145: 11.3520% ( 24) 00:07:59.722 7965.145 - 8015.557: 11.5195% ( 21) 00:07:59.722 8015.557 - 8065.969: 11.6948% ( 22) 00:07:59.722 8065.969 - 8116.382: 11.9180% ( 28) 00:07:59.722 8116.382 - 8166.794: 12.1173% ( 25) 00:07:59.722 8166.794 - 8217.206: 12.3485% ( 29) 00:07:59.722 8217.206 - 8267.618: 12.8667% ( 65) 00:07:59.722 8267.618 - 8318.031: 13.3371% ( 59) 00:07:59.722 8318.031 - 8368.443: 13.7277% ( 49) 00:07:59.722 8368.443 - 8418.855: 14.3256% ( 75) 00:07:59.722 8418.855 - 8469.268: 14.7003% ( 47) 00:07:59.722 8469.268 - 8519.680: 15.4098% ( 89) 00:07:59.722 8519.680 - 8570.092: 15.9518% ( 68) 00:07:59.722 8570.092 - 8620.505: 16.5099% ( 70) 00:07:59.722 8620.505 - 8670.917: 17.0998% ( 74) 00:07:59.722 8670.917 - 8721.329: 17.6419% ( 68) 00:07:59.722 8721.329 - 8771.742: 18.2557% ( 77) 00:07:59.722 8771.742 - 8822.154: 19.0131% ( 95) 00:07:59.722 8822.154 - 8872.566: 19.8023% ( 99) 00:07:59.722 8872.566 - 8922.978: 20.6314% ( 104) 00:07:59.722 8922.978 - 8973.391: 21.5163% ( 111) 00:07:59.722 8973.391 - 9023.803: 22.4490% ( 117) 00:07:59.722 9023.803 - 9074.215: 23.4534% ( 126) 00:07:59.722 9074.215 - 9124.628: 24.4659% ( 127) 00:07:59.722 9124.628 - 9175.040: 25.4305% ( 121) 00:07:59.722 9175.040 - 9225.452: 26.3632% ( 117) 00:07:59.722 9225.452 - 9275.865: 27.2003% ( 105) 00:07:59.722 9275.865 - 9326.277: 27.9018% ( 88) 00:07:59.722 9326.277 - 9376.689: 28.8983% ( 125) 00:07:59.722 9376.689 - 9427.102: 29.8390% ( 118) 00:07:59.722 9427.102 - 9477.514: 30.6282% ( 99) 00:07:59.722 9477.514 - 9527.926: 31.5768% ( 119) 00:07:59.722 9527.926 - 9578.338: 32.7009% ( 141) 00:07:59.722 9578.338 - 9628.751: 33.7293% ( 129) 00:07:59.722 9628.751 - 9679.163: 34.7895% ( 133) 00:07:59.722 9679.163 - 9729.575: 35.7302% ( 118) 00:07:59.722 9729.575 - 9779.988: 36.6709% ( 118) 00:07:59.722 9779.988 - 9830.400: 37.4601% ( 99) 00:07:59.722 9830.400 - 9880.812: 38.3450% ( 111) 00:07:59.722 9880.812 - 9931.225: 39.4053% ( 133) 00:07:59.722 9931.225 - 9981.637: 40.6888% ( 161) 00:07:59.722 9981.637 - 10032.049: 42.2194% ( 192) 
00:07:59.722 10032.049 - 10082.462: 43.7181% ( 188) 00:07:59.722 10082.462 - 10132.874: 45.5198% ( 226) 00:07:59.722 10132.874 - 10183.286: 47.2895% ( 222) 00:07:59.722 10183.286 - 10233.698: 49.1789% ( 237) 00:07:59.722 10233.698 - 10284.111: 51.4828% ( 289) 00:07:59.722 10284.111 - 10334.523: 53.8584% ( 298) 00:07:59.722 10334.523 - 10384.935: 55.7239% ( 234) 00:07:59.722 10384.935 - 10435.348: 57.6690% ( 244) 00:07:59.722 10435.348 - 10485.760: 59.9888% ( 291) 00:07:59.722 10485.760 - 10536.172: 62.2290% ( 281) 00:07:59.722 10536.172 - 10586.585: 64.1980% ( 247) 00:07:59.722 10586.585 - 10636.997: 66.2388% ( 256) 00:07:59.722 10636.997 - 10687.409: 68.2717% ( 255) 00:07:59.722 10687.409 - 10737.822: 70.0893% ( 228) 00:07:59.722 10737.822 - 10788.234: 71.6040% ( 190) 00:07:59.722 10788.234 - 10838.646: 73.0070% ( 176) 00:07:59.722 10838.646 - 10889.058: 74.1550% ( 144) 00:07:59.722 10889.058 - 10939.471: 75.1435% ( 124) 00:07:59.722 10939.471 - 10989.883: 76.1001% ( 120) 00:07:59.722 10989.883 - 11040.295: 77.0488% ( 119) 00:07:59.722 11040.295 - 11090.708: 78.0214% ( 122) 00:07:59.722 11090.708 - 11141.120: 78.8425% ( 103) 00:07:59.722 11141.120 - 11191.532: 79.7353% ( 112) 00:07:59.722 11191.532 - 11241.945: 80.7079% ( 122) 00:07:59.722 11241.945 - 11292.357: 81.5928% ( 111) 00:07:59.722 11292.357 - 11342.769: 82.7966% ( 151) 00:07:59.722 11342.769 - 11393.182: 83.8409% ( 131) 00:07:59.722 11393.182 - 11443.594: 84.7497% ( 114) 00:07:59.722 11443.594 - 11494.006: 85.5469% ( 100) 00:07:59.722 11494.006 - 11544.418: 86.4557% ( 114) 00:07:59.722 11544.418 - 11594.831: 87.2848% ( 104) 00:07:59.722 11594.831 - 11645.243: 87.8428% ( 70) 00:07:59.722 11645.243 - 11695.655: 88.3371% ( 62) 00:07:59.722 11695.655 - 11746.068: 88.9270% ( 74) 00:07:59.722 11746.068 - 11796.480: 89.3814% ( 57) 00:07:59.722 11796.480 - 11846.892: 89.7879% ( 51) 00:07:59.722 11846.892 - 11897.305: 90.1945% ( 51) 00:07:59.722 11897.305 - 11947.717: 90.5772% ( 48) 00:07:59.722 11947.717 - 11998.129: 90.8881% ( 39) 00:07:59.722 11998.129 - 12048.542: 91.2309% ( 43) 00:07:59.722 12048.542 - 12098.954: 91.5497% ( 40) 00:07:59.722 12098.954 - 12149.366: 91.7889% ( 30) 00:07:59.722 12149.366 - 12199.778: 91.9563% ( 21) 00:07:59.722 12199.778 - 12250.191: 92.1317% ( 22) 00:07:59.722 12250.191 - 12300.603: 92.3390% ( 26) 00:07:59.722 12300.603 - 12351.015: 92.6738% ( 42) 00:07:59.722 12351.015 - 12401.428: 92.8731% ( 25) 00:07:59.722 12401.428 - 12451.840: 93.1202% ( 31) 00:07:59.722 12451.840 - 12502.252: 93.3036% ( 23) 00:07:59.722 12502.252 - 12552.665: 93.4550% ( 19) 00:07:59.722 12552.665 - 12603.077: 93.6145% ( 20) 00:07:59.722 12603.077 - 12653.489: 93.8297% ( 27) 00:07:59.722 12653.489 - 12703.902: 93.9174% ( 11) 00:07:59.722 12703.902 - 12754.314: 93.9971% ( 10) 00:07:59.722 12754.314 - 12804.726: 94.1327% ( 17) 00:07:59.722 12804.726 - 12855.138: 94.2283% ( 12) 00:07:59.722 12855.138 - 12905.551: 94.3080% ( 10) 00:07:59.722 12905.551 - 13006.375: 94.4117% ( 13) 00:07:59.722 13006.375 - 13107.200: 94.6189% ( 26) 00:07:59.722 13107.200 - 13208.025: 94.9139% ( 37) 00:07:59.722 13208.025 - 13308.849: 95.3205% ( 51) 00:07:59.722 13308.849 - 13409.674: 95.6234% ( 38) 00:07:59.722 13409.674 - 13510.498: 95.8466% ( 28) 00:07:59.722 13510.498 - 13611.323: 96.1097% ( 33) 00:07:59.722 13611.323 - 13712.148: 96.3807% ( 34) 00:07:59.722 13712.148 - 13812.972: 96.5561% ( 22) 00:07:59.722 13812.972 - 13913.797: 96.8112% ( 32) 00:07:59.722 13913.797 - 14014.622: 97.1620% ( 44) 00:07:59.722 14014.622 - 14115.446: 97.3772% ( 27) 
00:07:59.722 14115.446 - 14216.271: 97.5845% ( 26) 00:07:59.722 14216.271 - 14317.095: 97.6802% ( 12) 00:07:59.722 14317.095 - 14417.920: 97.7758% ( 12) 00:07:59.722 14417.920 - 14518.745: 97.8954% ( 15) 00:07:59.722 14518.745 - 14619.569: 98.0708% ( 22) 00:07:59.722 14619.569 - 14720.394: 98.1505% ( 10) 00:07:59.722 14720.394 - 14821.218: 98.2382% ( 11) 00:07:59.722 14821.218 - 14922.043: 98.3498% ( 14) 00:07:59.722 14922.043 - 15022.868: 98.4774% ( 16) 00:07:59.722 15022.868 - 15123.692: 98.6209% ( 18) 00:07:59.722 15123.692 - 15224.517: 98.7564% ( 17) 00:07:59.722 15224.517 - 15325.342: 98.8999% ( 18) 00:07:59.722 15325.342 - 15426.166: 99.0274% ( 16) 00:07:59.722 15426.166 - 15526.991: 99.1470% ( 15) 00:07:59.722 15526.991 - 15627.815: 99.2188% ( 9) 00:07:59.722 15627.815 - 15728.640: 99.2985% ( 10) 00:07:59.722 15728.640 - 15829.465: 99.3622% ( 8) 00:07:59.722 15829.465 - 15930.289: 99.4101% ( 6) 00:07:59.722 15930.289 - 16031.114: 99.4420% ( 4) 00:07:59.722 16031.114 - 16131.938: 99.4739% ( 4) 00:07:59.722 16131.938 - 16232.763: 99.4898% ( 2) 00:07:59.722 19862.449 - 19963.274: 99.4978% ( 1) 00:07:59.722 19963.274 - 20064.098: 99.5217% ( 3) 00:07:59.722 20064.098 - 20164.923: 99.5536% ( 4) 00:07:59.722 20164.923 - 20265.748: 99.5775% ( 3) 00:07:59.722 20265.748 - 20366.572: 99.6094% ( 4) 00:07:59.722 20366.572 - 20467.397: 99.6333% ( 3) 00:07:59.722 20467.397 - 20568.222: 99.6652% ( 4) 00:07:59.722 20568.222 - 20669.046: 99.6971% ( 4) 00:07:59.722 20669.046 - 20769.871: 99.7210% ( 3) 00:07:59.722 20769.871 - 20870.695: 99.7449% ( 3) 00:07:59.722 20870.695 - 20971.520: 99.7768% ( 4) 00:07:59.722 20971.520 - 21072.345: 99.8087% ( 4) 00:07:59.722 21072.345 - 21173.169: 99.8326% ( 3) 00:07:59.722 21173.169 - 21273.994: 99.8645% ( 4) 00:07:59.722 21273.994 - 21374.818: 99.8884% ( 3) 00:07:59.722 21374.818 - 21475.643: 99.9203% ( 4) 00:07:59.722 21475.643 - 21576.468: 99.9442% ( 3) 00:07:59.722 21576.468 - 21677.292: 99.9761% ( 4) 00:07:59.722 21677.292 - 21778.117: 100.0000% ( 3) 00:07:59.722 00:07:59.982 19:06:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:59.982 00:07:59.982 real 0m2.539s 00:07:59.982 user 0m2.218s 00:07:59.982 sys 0m0.211s 00:07:59.982 19:06:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.982 ************************************ 00:07:59.982 END TEST nvme_perf 00:07:59.982 ************************************ 00:07:59.982 19:06:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.982 19:06:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:59.982 19:06:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:59.982 19:06:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.982 19:06:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.982 ************************************ 00:07:59.982 START TEST nvme_hello_world 00:07:59.982 ************************************ 00:07:59.982 19:06:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:00.242 Initializing NVMe Controllers 00:08:00.242 Attached to 0000:00:10.0 00:08:00.242 Namespace ID: 1 size: 6GB 00:08:00.242 Attached to 0000:00:11.0 00:08:00.242 Namespace ID: 1 size: 5GB 00:08:00.242 Attached to 0000:00:13.0 00:08:00.242 Namespace ID: 1 size: 1GB 00:08:00.242 Attached to 0000:00:12.0 00:08:00.242 Namespace ID: 1 size: 4GB 00:08:00.242 Namespace ID: 2 size: 4GB 
00:08:00.242 Namespace ID: 3 size: 4GB 00:08:00.242 Initialization complete. 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 INFO: using host memory buffer for IO 00:08:00.242 Hello world! 00:08:00.242 00:08:00.242 real 0m0.241s 00:08:00.242 user 0m0.085s 00:08:00.242 sys 0m0.112s 00:08:00.242 19:06:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.242 19:06:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.242 ************************************ 00:08:00.242 END TEST nvme_hello_world 00:08:00.242 ************************************ 00:08:00.242 19:06:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:00.242 19:06:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.242 19:06:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.242 19:06:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.242 ************************************ 00:08:00.242 START TEST nvme_sgl 00:08:00.242 ************************************ 00:08:00.242 19:06:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:00.501 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:00.501 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:00.501 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:00.501 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:00.501 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:00.501 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:00.501 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:00.501 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_2 Invalid IO length 
parameter 00:08:00.501 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:00.501 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:00.501 NVMe Readv/Writev Request test 00:08:00.501 Attached to 0000:00:10.0 00:08:00.501 Attached to 0000:00:11.0 00:08:00.501 Attached to 0000:00:13.0 00:08:00.501 Attached to 0000:00:12.0 00:08:00.501 0000:00:10.0: build_io_request_2 test passed 00:08:00.501 0000:00:10.0: build_io_request_4 test passed 00:08:00.501 0000:00:10.0: build_io_request_5 test passed 00:08:00.501 0000:00:10.0: build_io_request_6 test passed 00:08:00.501 0000:00:10.0: build_io_request_7 test passed 00:08:00.501 0000:00:10.0: build_io_request_10 test passed 00:08:00.501 0000:00:11.0: build_io_request_2 test passed 00:08:00.501 0000:00:11.0: build_io_request_4 test passed 00:08:00.501 0000:00:11.0: build_io_request_5 test passed 00:08:00.501 0000:00:11.0: build_io_request_6 test passed 00:08:00.501 0000:00:11.0: build_io_request_7 test passed 00:08:00.501 0000:00:11.0: build_io_request_10 test passed 00:08:00.501 Cleaning up... 00:08:00.501 00:08:00.501 real 0m0.315s 00:08:00.501 user 0m0.150s 00:08:00.501 sys 0m0.104s 00:08:00.501 19:06:10 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.501 19:06:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:00.501 ************************************ 00:08:00.501 END TEST nvme_sgl 00:08:00.501 ************************************ 00:08:00.501 19:06:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:00.501 19:06:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.501 19:06:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.501 19:06:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.501 ************************************ 00:08:00.501 START TEST nvme_e2edp 00:08:00.501 ************************************ 00:08:00.502 19:06:10 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:00.760 NVMe Write/Read with End-to-End data protection test 00:08:00.760 Attached to 0000:00:10.0 00:08:00.760 Attached to 0000:00:11.0 00:08:00.760 Attached to 0000:00:13.0 00:08:00.760 Attached to 0000:00:12.0 00:08:00.760 Cleaning up... 
00:08:00.760 00:08:00.760 real 0m0.219s 00:08:00.760 user 0m0.070s 00:08:00.760 sys 0m0.105s 00:08:00.760 19:06:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.760 19:06:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:00.760 ************************************ 00:08:00.760 END TEST nvme_e2edp 00:08:00.760 ************************************ 00:08:00.760 19:06:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:00.760 19:06:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.760 19:06:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.760 19:06:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.760 ************************************ 00:08:00.760 START TEST nvme_reserve 00:08:00.760 ************************************ 00:08:00.760 19:06:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:01.018 ===================================================== 00:08:01.018 NVMe Controller at PCI bus 0, device 16, function 0 00:08:01.018 ===================================================== 00:08:01.018 Reservations: Not Supported 00:08:01.018 ===================================================== 00:08:01.018 NVMe Controller at PCI bus 0, device 17, function 0 00:08:01.018 ===================================================== 00:08:01.018 Reservations: Not Supported 00:08:01.018 ===================================================== 00:08:01.018 NVMe Controller at PCI bus 0, device 19, function 0 00:08:01.018 ===================================================== 00:08:01.018 Reservations: Not Supported 00:08:01.018 ===================================================== 00:08:01.018 NVMe Controller at PCI bus 0, device 18, function 0 00:08:01.018 ===================================================== 00:08:01.018 Reservations: Not Supported 00:08:01.018 Reservation test passed 00:08:01.018 00:08:01.018 real 0m0.220s 00:08:01.018 user 0m0.071s 00:08:01.018 sys 0m0.100s 00:08:01.018 19:06:10 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.018 ************************************ 00:08:01.018 END TEST nvme_reserve 00:08:01.018 ************************************ 00:08:01.018 19:06:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:01.018 19:06:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:01.018 19:06:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.018 19:06:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.018 19:06:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.018 ************************************ 00:08:01.018 START TEST nvme_err_injection 00:08:01.018 ************************************ 00:08:01.018 19:06:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:01.277 NVMe Error Injection test 00:08:01.277 Attached to 0000:00:10.0 00:08:01.277 Attached to 0000:00:11.0 00:08:01.277 Attached to 0000:00:13.0 00:08:01.277 Attached to 0000:00:12.0 00:08:01.277 0000:00:12.0: get features failed as expected 00:08:01.277 0000:00:10.0: get features failed as expected 00:08:01.277 0000:00:11.0: get features failed as expected 00:08:01.277 0000:00:13.0: get features failed as expected 00:08:01.277 
0000:00:10.0: get features successfully as expected 00:08:01.277 0000:00:11.0: get features successfully as expected 00:08:01.277 0000:00:13.0: get features successfully as expected 00:08:01.277 0000:00:12.0: get features successfully as expected 00:08:01.277 0000:00:10.0: read failed as expected 00:08:01.277 0000:00:11.0: read failed as expected 00:08:01.277 0000:00:13.0: read failed as expected 00:08:01.277 0000:00:12.0: read failed as expected 00:08:01.277 0000:00:10.0: read successfully as expected 00:08:01.277 0000:00:11.0: read successfully as expected 00:08:01.277 0000:00:13.0: read successfully as expected 00:08:01.277 0000:00:12.0: read successfully as expected 00:08:01.277 Cleaning up... 00:08:01.277 00:08:01.277 real 0m0.247s 00:08:01.277 user 0m0.088s 00:08:01.277 sys 0m0.108s 00:08:01.277 19:06:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.277 19:06:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:01.277 ************************************ 00:08:01.277 END TEST nvme_err_injection 00:08:01.277 ************************************ 00:08:01.277 19:06:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:01.277 19:06:10 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:01.277 19:06:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.277 19:06:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.277 ************************************ 00:08:01.277 START TEST nvme_overhead 00:08:01.277 ************************************ 00:08:01.277 19:06:10 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:02.653 Initializing NVMe Controllers 00:08:02.653 Attached to 0000:00:10.0 00:08:02.653 Attached to 0000:00:11.0 00:08:02.653 Attached to 0000:00:13.0 00:08:02.653 Attached to 0000:00:12.0 00:08:02.653 Initialization complete. Launching workers. 
00:08:02.653 submit (in ns) avg, min, max = 12063.1, 10486.2, 53880.0 00:08:02.653 complete (in ns) avg, min, max = 7602.0, 7212.3, 73182.3 00:08:02.653 00:08:02.653 Submit histogram 00:08:02.653 ================ 00:08:02.653 Range in us Cumulative Count 00:08:02.653 10.486 - 10.535: 0.0058% ( 1) 00:08:02.653 10.732 - 10.782: 0.0116% ( 1) 00:08:02.653 10.978 - 11.028: 0.0174% ( 1) 00:08:02.653 11.175 - 11.225: 0.0233% ( 1) 00:08:02.653 11.274 - 11.323: 0.0291% ( 1) 00:08:02.653 11.323 - 11.372: 0.0523% ( 4) 00:08:02.653 11.372 - 11.422: 0.1802% ( 22) 00:08:02.653 11.422 - 11.471: 0.8547% ( 116) 00:08:02.653 11.471 - 11.520: 2.9767% ( 365) 00:08:02.653 11.520 - 11.569: 6.9419% ( 682) 00:08:02.653 11.569 - 11.618: 13.1628% ( 1070) 00:08:02.653 11.618 - 11.668: 21.1628% ( 1376) 00:08:02.653 11.668 - 11.717: 30.3605% ( 1582) 00:08:02.653 11.717 - 11.766: 39.9593% ( 1651) 00:08:02.653 11.766 - 11.815: 48.6163% ( 1489) 00:08:02.653 11.815 - 11.865: 56.2791% ( 1318) 00:08:02.653 11.865 - 11.914: 62.6047% ( 1088) 00:08:02.653 11.914 - 11.963: 67.5523% ( 851) 00:08:02.653 11.963 - 12.012: 71.4302% ( 667) 00:08:02.653 12.012 - 12.062: 74.7849% ( 577) 00:08:02.653 12.062 - 12.111: 77.7442% ( 509) 00:08:02.653 12.111 - 12.160: 80.1977% ( 422) 00:08:02.653 12.160 - 12.209: 82.1512% ( 336) 00:08:02.654 12.209 - 12.258: 83.9012% ( 301) 00:08:02.654 12.258 - 12.308: 85.5058% ( 276) 00:08:02.654 12.308 - 12.357: 86.8953% ( 239) 00:08:02.654 12.357 - 12.406: 88.2500% ( 233) 00:08:02.654 12.406 - 12.455: 89.3663% ( 192) 00:08:02.654 12.455 - 12.505: 90.4709% ( 190) 00:08:02.654 12.505 - 12.554: 91.3081% ( 144) 00:08:02.654 12.554 - 12.603: 92.2035% ( 154) 00:08:02.654 12.603 - 12.702: 93.5523% ( 232) 00:08:02.654 12.702 - 12.800: 94.7500% ( 206) 00:08:02.654 12.800 - 12.898: 95.5291% ( 134) 00:08:02.654 12.898 - 12.997: 96.0988% ( 98) 00:08:02.654 12.997 - 13.095: 96.4884% ( 67) 00:08:02.654 13.095 - 13.194: 96.6919% ( 35) 00:08:02.654 13.194 - 13.292: 96.8721% ( 31) 00:08:02.654 13.292 - 13.391: 96.9651% ( 16) 00:08:02.654 13.391 - 13.489: 97.0640% ( 17) 00:08:02.654 13.489 - 13.588: 97.1163% ( 9) 00:08:02.654 13.588 - 13.686: 97.1628% ( 8) 00:08:02.654 13.686 - 13.785: 97.1744% ( 2) 00:08:02.654 13.785 - 13.883: 97.2384% ( 11) 00:08:02.654 13.883 - 13.982: 97.3663% ( 22) 00:08:02.654 13.982 - 14.080: 97.4593% ( 16) 00:08:02.654 14.080 - 14.178: 97.5349% ( 13) 00:08:02.654 14.178 - 14.277: 97.6337% ( 17) 00:08:02.654 14.277 - 14.375: 97.7151% ( 14) 00:08:02.654 14.375 - 14.474: 97.7616% ( 8) 00:08:02.654 14.474 - 14.572: 97.8779% ( 20) 00:08:02.654 14.572 - 14.671: 97.9186% ( 7) 00:08:02.654 14.671 - 14.769: 97.9709% ( 9) 00:08:02.654 14.769 - 14.868: 98.0116% ( 7) 00:08:02.654 14.868 - 14.966: 98.0465% ( 6) 00:08:02.654 14.966 - 15.065: 98.0872% ( 7) 00:08:02.654 15.065 - 15.163: 98.1163% ( 5) 00:08:02.654 15.163 - 15.262: 98.1395% ( 4) 00:08:02.654 15.262 - 15.360: 98.1628% ( 4) 00:08:02.654 15.360 - 15.458: 98.1744% ( 2) 00:08:02.654 15.458 - 15.557: 98.1860% ( 2) 00:08:02.654 15.557 - 15.655: 98.1977% ( 2) 00:08:02.654 15.655 - 15.754: 98.2151% ( 3) 00:08:02.654 15.754 - 15.852: 98.2209% ( 1) 00:08:02.654 15.852 - 15.951: 98.2384% ( 3) 00:08:02.654 15.951 - 16.049: 98.2674% ( 5) 00:08:02.654 16.049 - 16.148: 98.2965% ( 5) 00:08:02.654 16.148 - 16.246: 98.3140% ( 3) 00:08:02.654 16.345 - 16.443: 98.3372% ( 4) 00:08:02.654 16.443 - 16.542: 98.3605% ( 4) 00:08:02.654 16.542 - 16.640: 98.3721% ( 2) 00:08:02.654 16.640 - 16.738: 98.3953% ( 4) 00:08:02.654 16.738 - 16.837: 98.4070% ( 2) 00:08:02.654 16.837 - 
16.935: 98.4128% ( 1) 00:08:02.654 16.935 - 17.034: 98.4360% ( 4) 00:08:02.654 17.034 - 17.132: 98.4535% ( 3) 00:08:02.654 17.132 - 17.231: 98.4826% ( 5) 00:08:02.654 17.231 - 17.329: 98.5174% ( 6) 00:08:02.654 17.329 - 17.428: 98.5581% ( 7) 00:08:02.654 17.428 - 17.526: 98.6047% ( 8) 00:08:02.654 17.526 - 17.625: 98.6860% ( 14) 00:08:02.654 17.625 - 17.723: 98.7384% ( 9) 00:08:02.654 17.723 - 17.822: 98.8372% ( 17) 00:08:02.654 17.822 - 17.920: 98.9012% ( 11) 00:08:02.654 17.920 - 18.018: 98.9360% ( 6) 00:08:02.654 18.018 - 18.117: 98.9884% ( 9) 00:08:02.654 18.117 - 18.215: 99.0814% ( 16) 00:08:02.654 18.215 - 18.314: 99.1337% ( 9) 00:08:02.654 18.314 - 18.412: 99.2209% ( 15) 00:08:02.654 18.412 - 18.511: 99.3023% ( 14) 00:08:02.654 18.511 - 18.609: 99.3779% ( 13) 00:08:02.654 18.609 - 18.708: 99.4186% ( 7) 00:08:02.654 18.708 - 18.806: 99.4360% ( 3) 00:08:02.654 18.806 - 18.905: 99.5000% ( 11) 00:08:02.654 18.905 - 19.003: 99.5233% ( 4) 00:08:02.654 19.003 - 19.102: 99.5640% ( 7) 00:08:02.654 19.102 - 19.200: 99.5756% ( 2) 00:08:02.654 19.200 - 19.298: 99.6047% ( 5) 00:08:02.654 19.298 - 19.397: 99.6221% ( 3) 00:08:02.654 19.397 - 19.495: 99.6395% ( 3) 00:08:02.654 19.495 - 19.594: 99.6453% ( 1) 00:08:02.654 19.594 - 19.692: 99.6860% ( 7) 00:08:02.654 19.692 - 19.791: 99.6977% ( 2) 00:08:02.654 19.791 - 19.889: 99.7093% ( 2) 00:08:02.654 19.889 - 19.988: 99.7326% ( 4) 00:08:02.654 20.086 - 20.185: 99.7442% ( 2) 00:08:02.654 20.185 - 20.283: 99.7616% ( 3) 00:08:02.654 20.283 - 20.382: 99.7674% ( 1) 00:08:02.654 20.382 - 20.480: 99.7733% ( 1) 00:08:02.654 20.480 - 20.578: 99.7907% ( 3) 00:08:02.654 20.677 - 20.775: 99.7965% ( 1) 00:08:02.654 20.972 - 21.071: 99.8023% ( 1) 00:08:02.654 21.169 - 21.268: 99.8081% ( 1) 00:08:02.654 21.366 - 21.465: 99.8140% ( 1) 00:08:02.654 21.465 - 21.563: 99.8198% ( 1) 00:08:02.654 21.662 - 21.760: 99.8430% ( 4) 00:08:02.654 21.858 - 21.957: 99.8488% ( 1) 00:08:02.654 22.055 - 22.154: 99.8605% ( 2) 00:08:02.654 22.154 - 22.252: 99.8663% ( 1) 00:08:02.654 22.548 - 22.646: 99.8721% ( 1) 00:08:02.654 22.646 - 22.745: 99.8779% ( 1) 00:08:02.654 22.745 - 22.843: 99.8837% ( 1) 00:08:02.654 23.040 - 23.138: 99.9012% ( 3) 00:08:02.654 23.434 - 23.532: 99.9070% ( 1) 00:08:02.654 23.631 - 23.729: 99.9128% ( 1) 00:08:02.654 24.025 - 24.123: 99.9244% ( 2) 00:08:02.654 25.108 - 25.206: 99.9302% ( 1) 00:08:02.654 25.403 - 25.600: 99.9360% ( 1) 00:08:02.654 25.797 - 25.994: 99.9419% ( 1) 00:08:02.654 26.978 - 27.175: 99.9477% ( 1) 00:08:02.654 27.963 - 28.160: 99.9535% ( 1) 00:08:02.654 28.357 - 28.554: 99.9593% ( 1) 00:08:02.654 30.129 - 30.326: 99.9651% ( 1) 00:08:02.654 38.597 - 38.794: 99.9709% ( 1) 00:08:02.654 42.732 - 42.929: 99.9767% ( 1) 00:08:02.654 46.671 - 46.868: 99.9826% ( 1) 00:08:02.654 48.443 - 48.640: 99.9884% ( 1) 00:08:02.654 50.018 - 50.215: 99.9942% ( 1) 00:08:02.654 53.563 - 53.957: 100.0000% ( 1) 00:08:02.654 00:08:02.654 Complete histogram 00:08:02.654 ================== 00:08:02.654 Range in us Cumulative Count 00:08:02.654 7.188 - 7.237: 0.0058% ( 1) 00:08:02.654 7.237 - 7.286: 0.3895% ( 66) 00:08:02.654 7.286 - 7.335: 4.5116% ( 709) 00:08:02.654 7.335 - 7.385: 17.7326% ( 2274) 00:08:02.654 7.385 - 7.434: 39.0523% ( 3667) 00:08:02.654 7.434 - 7.483: 60.4709% ( 3684) 00:08:02.654 7.483 - 7.532: 76.7093% ( 2793) 00:08:02.654 7.532 - 7.582: 86.6395% ( 1708) 00:08:02.654 7.582 - 7.631: 91.7965% ( 887) 00:08:02.654 7.631 - 7.680: 94.7267% ( 504) 00:08:02.654 7.680 - 7.729: 96.2500% ( 262) 00:08:02.654 7.729 - 7.778: 97.1919% ( 162) 00:08:02.654 
7.778 - 7.828: 97.6919% ( 86) 00:08:02.654 7.828 - 7.877: 97.9593% ( 46) 00:08:02.654 7.877 - 7.926: 98.1512% ( 33) 00:08:02.654 7.926 - 7.975: 98.2558% ( 18) 00:08:02.654 7.975 - 8.025: 98.3140% ( 10) 00:08:02.654 8.025 - 8.074: 98.3547% ( 7) 00:08:02.654 8.074 - 8.123: 98.3895% ( 6) 00:08:02.654 8.123 - 8.172: 98.4012% ( 2) 00:08:02.654 8.172 - 8.222: 98.4128% ( 2) 00:08:02.654 8.320 - 8.369: 98.4186% ( 1) 00:08:02.654 8.369 - 8.418: 98.4244% ( 1) 00:08:02.654 8.468 - 8.517: 98.4302% ( 1) 00:08:02.654 8.615 - 8.665: 98.4419% ( 2) 00:08:02.654 8.862 - 8.911: 98.4477% ( 1) 00:08:02.654 8.911 - 8.960: 98.4535% ( 1) 00:08:02.654 8.960 - 9.009: 98.4593% ( 1) 00:08:02.654 9.009 - 9.058: 98.4651% ( 1) 00:08:02.654 9.698 - 9.748: 98.4709% ( 1) 00:08:02.654 10.978 - 11.028: 98.4767% ( 1) 00:08:02.654 11.225 - 11.274: 98.4826% ( 1) 00:08:02.654 11.323 - 11.372: 98.4884% ( 1) 00:08:02.654 11.569 - 11.618: 98.5000% ( 2) 00:08:02.654 11.618 - 11.668: 98.5058% ( 1) 00:08:02.654 11.815 - 11.865: 98.5116% ( 1) 00:08:02.654 11.865 - 11.914: 98.5174% ( 1) 00:08:02.654 12.308 - 12.357: 98.5233% ( 1) 00:08:02.654 12.455 - 12.505: 98.5291% ( 1) 00:08:02.654 12.505 - 12.554: 98.5349% ( 1) 00:08:02.654 12.603 - 12.702: 98.5465% ( 2) 00:08:02.654 12.702 - 12.800: 98.5523% ( 1) 00:08:02.654 12.800 - 12.898: 98.5581% ( 1) 00:08:02.654 12.898 - 12.997: 98.5698% ( 2) 00:08:02.654 12.997 - 13.095: 98.5930% ( 4) 00:08:02.654 13.095 - 13.194: 98.6221% ( 5) 00:08:02.654 13.194 - 13.292: 98.6628% ( 7) 00:08:02.654 13.292 - 13.391: 98.7209% ( 10) 00:08:02.654 13.391 - 13.489: 98.8198% ( 17) 00:08:02.654 13.489 - 13.588: 98.8895% ( 12) 00:08:02.654 13.588 - 13.686: 98.9360% ( 8) 00:08:02.654 13.686 - 13.785: 98.9884% ( 9) 00:08:02.654 13.785 - 13.883: 99.0581% ( 12) 00:08:02.654 13.883 - 13.982: 99.1279% ( 12) 00:08:02.654 13.982 - 14.080: 99.1802% ( 9) 00:08:02.654 14.080 - 14.178: 99.2500% ( 12) 00:08:02.655 14.178 - 14.277: 99.3198% ( 12) 00:08:02.655 14.277 - 14.375: 99.3779% ( 10) 00:08:02.655 14.375 - 14.474: 99.4186% ( 7) 00:08:02.655 14.474 - 14.572: 99.4535% ( 6) 00:08:02.655 14.572 - 14.671: 99.4884% ( 6) 00:08:02.655 14.671 - 14.769: 99.5291% ( 7) 00:08:02.655 14.769 - 14.868: 99.5465% ( 3) 00:08:02.655 14.868 - 14.966: 99.5756% ( 5) 00:08:02.655 14.966 - 15.065: 99.6105% ( 6) 00:08:02.655 15.065 - 15.163: 99.6221% ( 2) 00:08:02.655 15.163 - 15.262: 99.6395% ( 3) 00:08:02.655 15.360 - 15.458: 99.6512% ( 2) 00:08:02.655 15.458 - 15.557: 99.6686% ( 3) 00:08:02.655 15.557 - 15.655: 99.6744% ( 1) 00:08:02.655 15.655 - 15.754: 99.6919% ( 3) 00:08:02.655 15.754 - 15.852: 99.7035% ( 2) 00:08:02.655 15.852 - 15.951: 99.7093% ( 1) 00:08:02.655 15.951 - 16.049: 99.7151% ( 1) 00:08:02.655 16.049 - 16.148: 99.7209% ( 1) 00:08:02.655 16.345 - 16.443: 99.7267% ( 1) 00:08:02.655 16.443 - 16.542: 99.7326% ( 1) 00:08:02.655 16.640 - 16.738: 99.7384% ( 1) 00:08:02.655 16.837 - 16.935: 99.7500% ( 2) 00:08:02.655 17.132 - 17.231: 99.7558% ( 1) 00:08:02.655 17.231 - 17.329: 99.7616% ( 1) 00:08:02.655 17.329 - 17.428: 99.7674% ( 1) 00:08:02.655 17.428 - 17.526: 99.7733% ( 1) 00:08:02.655 17.625 - 17.723: 99.7791% ( 1) 00:08:02.655 17.822 - 17.920: 99.7849% ( 1) 00:08:02.655 18.215 - 18.314: 99.7965% ( 2) 00:08:02.655 18.314 - 18.412: 99.8023% ( 1) 00:08:02.655 18.412 - 18.511: 99.8140% ( 2) 00:08:02.655 18.511 - 18.609: 99.8256% ( 2) 00:08:02.655 18.609 - 18.708: 99.8314% ( 1) 00:08:02.655 18.708 - 18.806: 99.8430% ( 2) 00:08:02.655 18.806 - 18.905: 99.8547% ( 2) 00:08:02.655 18.905 - 19.003: 99.8605% ( 1) 00:08:02.655 19.003 - 
19.102: 99.8663% ( 1) 00:08:02.655 19.298 - 19.397: 99.8721% ( 1) 00:08:02.655 19.397 - 19.495: 99.8779% ( 1) 00:08:02.655 19.495 - 19.594: 99.8895% ( 2) 00:08:02.655 19.791 - 19.889: 99.8953% ( 1) 00:08:02.655 20.185 - 20.283: 99.9070% ( 2) 00:08:02.655 20.677 - 20.775: 99.9128% ( 1) 00:08:02.655 20.874 - 20.972: 99.9186% ( 1) 00:08:02.655 24.714 - 24.812: 99.9302% ( 2) 00:08:02.655 25.994 - 26.191: 99.9360% ( 1) 00:08:02.655 26.191 - 26.388: 99.9419% ( 1) 00:08:02.655 30.523 - 30.720: 99.9477% ( 1) 00:08:02.655 37.022 - 37.218: 99.9535% ( 1) 00:08:02.655 44.702 - 44.898: 99.9593% ( 1) 00:08:02.655 45.095 - 45.292: 99.9651% ( 1) 00:08:02.655 45.292 - 45.489: 99.9709% ( 1) 00:08:02.655 47.065 - 47.262: 99.9767% ( 1) 00:08:02.655 48.837 - 49.034: 99.9826% ( 1) 00:08:02.655 50.412 - 50.806: 99.9884% ( 1) 00:08:02.655 55.926 - 56.320: 99.9942% ( 1) 00:08:02.655 72.862 - 73.255: 100.0000% ( 1) 00:08:02.655 00:08:02.655 00:08:02.655 real 0m1.225s 00:08:02.655 user 0m1.066s 00:08:02.655 sys 0m0.110s 00:08:02.655 19:06:12 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.655 19:06:12 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 ************************************ 00:08:02.655 END TEST nvme_overhead 00:08:02.655 ************************************ 00:08:02.655 19:06:12 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:02.655 19:06:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:02.655 19:06:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.655 19:06:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 ************************************ 00:08:02.655 START TEST nvme_arbitration 00:08:02.655 ************************************ 00:08:02.655 19:06:12 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:05.947 Initializing NVMe Controllers 00:08:05.947 Attached to 0000:00:10.0 00:08:05.947 Attached to 0000:00:11.0 00:08:05.947 Attached to 0000:00:13.0 00:08:05.947 Attached to 0000:00:12.0 00:08:05.947 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:05.947 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:05.947 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:05.947 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:05.947 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:05.947 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:05.947 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:05.947 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:05.947 Initialization complete. Launching workers. 
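In the arbitration summary that follows, each core line reports throughput as IO/s alongside a projected secs/100000 ios figure; the second value is simply the fixed I/O count divided by the measured rate. A quick standalone check of that relationship (a hypothetical one-liner, not part of the test harness):

# 100000 I/Os at 874.67 IO/s -> ~114.33 s, matching the secs/100000 ios column below.
awk 'BEGIN { printf "%.2f\n", 100000 / 874.67 }'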
00:08:05.947 Starting thread on core 1 with urgent priority queue 00:08:05.947 Starting thread on core 2 with urgent priority queue 00:08:05.947 Starting thread on core 3 with urgent priority queue 00:08:05.947 Starting thread on core 0 with urgent priority queue 00:08:05.947 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:05.947 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:05.947 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:05.947 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:05.947 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:08:05.947 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:08:05.947 ======================================================== 00:08:05.947 00:08:05.947 00:08:05.947 real 0m3.333s 00:08:05.947 user 0m9.283s 00:08:05.947 sys 0m0.122s 00:08:05.947 19:06:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.947 ************************************ 00:08:05.947 END TEST nvme_arbitration 00:08:05.947 ************************************ 00:08:05.947 19:06:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:05.947 19:06:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:05.947 19:06:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.947 19:06:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.947 19:06:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.947 ************************************ 00:08:05.947 START TEST nvme_single_aen 00:08:05.947 ************************************ 00:08:05.947 19:06:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:06.209 Asynchronous Event Request test 00:08:06.209 Attached to 0000:00:10.0 00:08:06.209 Attached to 0000:00:11.0 00:08:06.209 Attached to 0000:00:13.0 00:08:06.209 Attached to 0000:00:12.0 00:08:06.209 Reset controller to setup AER completions for this process 00:08:06.209 Registering asynchronous event callbacks... 
00:08:06.209 Getting orig temperature thresholds of all controllers 00:08:06.209 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.209 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.209 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.209 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.209 Setting all controllers temperature threshold low to trigger AER 00:08:06.209 Waiting for all controllers temperature threshold to be set lower 00:08:06.209 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.209 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:06.209 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.209 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:06.209 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.209 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:06.209 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.209 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:06.209 Waiting for all controllers to trigger AER and reset threshold 00:08:06.209 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.209 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.209 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.209 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.209 Cleaning up... 00:08:06.209 00:08:06.209 real 0m0.258s 00:08:06.209 user 0m0.094s 00:08:06.209 sys 0m0.117s 00:08:06.209 19:06:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.209 ************************************ 00:08:06.209 END TEST nvme_single_aen 00:08:06.209 ************************************ 00:08:06.209 19:06:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:06.470 19:06:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:06.470 19:06:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.470 19:06:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.470 19:06:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.470 ************************************ 00:08:06.470 START TEST nvme_doorbell_aers 00:08:06.470 ************************************ 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
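For readability, the controller enumeration that the xtrace above expands step by step (and the per-device loop that follows it) can be summarized as a standalone sketch. The paths are the ones used in this run; this is an illustration of the pattern, not a drop-in replacement for nvme.sh:

#!/usr/bin/env bash
# gen_nvme.sh emits an SPDK JSON config; jq extracts each controller's PCIe address (traddr).
rootdir=/home/vagrant/spdk_repo/spdk

# Collect the PCIe addresses (bdfs) of all NVMe controllers SPDK can see.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

# Run the doorbell AER test against each controller, capping each run at 10 seconds
# while keeping the child's exit status, as the harness does below.
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done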
00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.470 19:06:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:06.732 [2024-11-27 19:06:16.176538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:16.706 Executing: test_write_invalid_db 00:08:16.706 Waiting for AER completion... 00:08:16.706 Failure: test_write_invalid_db 00:08:16.706 00:08:16.706 Executing: test_invalid_db_write_overflow_sq 00:08:16.706 Waiting for AER completion... 00:08:16.706 Failure: test_invalid_db_write_overflow_sq 00:08:16.706 00:08:16.706 Executing: test_invalid_db_write_overflow_cq 00:08:16.706 Waiting for AER completion... 00:08:16.706 Failure: test_invalid_db_write_overflow_cq 00:08:16.706 00:08:16.706 19:06:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.706 19:06:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:16.706 [2024-11-27 19:06:26.175068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:26.698 Executing: test_write_invalid_db 00:08:26.698 Waiting for AER completion... 00:08:26.698 Failure: test_write_invalid_db 00:08:26.698 00:08:26.698 Executing: test_invalid_db_write_overflow_sq 00:08:26.698 Waiting for AER completion... 00:08:26.698 Failure: test_invalid_db_write_overflow_sq 00:08:26.698 00:08:26.698 Executing: test_invalid_db_write_overflow_cq 00:08:26.698 Waiting for AER completion... 00:08:26.698 Failure: test_invalid_db_write_overflow_cq 00:08:26.698 00:08:26.698 19:06:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:26.698 19:06:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:26.698 [2024-11-27 19:06:36.247050] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:36.667 Executing: test_write_invalid_db 00:08:36.668 Waiting for AER completion... 00:08:36.668 Failure: test_write_invalid_db 00:08:36.668 00:08:36.668 Executing: test_invalid_db_write_overflow_sq 00:08:36.668 Waiting for AER completion... 00:08:36.668 Failure: test_invalid_db_write_overflow_sq 00:08:36.668 00:08:36.668 Executing: test_invalid_db_write_overflow_cq 00:08:36.668 Waiting for AER completion... 
00:08:36.668 Failure: test_invalid_db_write_overflow_cq 00:08:36.668 00:08:36.668 19:06:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.668 19:06:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:36.668 [2024-11-27 19:06:46.238480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 Executing: test_write_invalid_db 00:08:46.688 Waiting for AER completion... 00:08:46.688 Failure: test_write_invalid_db 00:08:46.688 00:08:46.688 Executing: test_invalid_db_write_overflow_sq 00:08:46.688 Waiting for AER completion... 00:08:46.688 Failure: test_invalid_db_write_overflow_sq 00:08:46.688 00:08:46.688 Executing: test_invalid_db_write_overflow_cq 00:08:46.688 Waiting for AER completion... 00:08:46.688 Failure: test_invalid_db_write_overflow_cq 00:08:46.688 00:08:46.688 00:08:46.688 real 0m40.204s 00:08:46.688 user 0m34.117s 00:08:46.688 sys 0m5.694s 00:08:46.688 19:06:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.688 19:06:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:46.688 ************************************ 00:08:46.688 END TEST nvme_doorbell_aers 00:08:46.688 ************************************ 00:08:46.688 19:06:56 nvme -- nvme/nvme.sh@97 -- # uname 00:08:46.688 19:06:56 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:46.688 19:06:56 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.688 19:06:56 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:46.688 19:06:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.688 19:06:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.688 ************************************ 00:08:46.688 START TEST nvme_multi_aen 00:08:46.688 ************************************ 00:08:46.688 19:06:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.688 [2024-11-27 19:06:56.302619] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.302678] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.302689] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.304298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.304339] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.304349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.305586] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. 
Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.305616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.305624] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.306756] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.306782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 [2024-11-27 19:06:56.306790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63258) is not found. Dropping the request. 00:08:46.688 Child process pid: 63784 00:08:46.947 [Child] Asynchronous Event Request test 00:08:46.947 [Child] Attached to 0000:00:10.0 00:08:46.947 [Child] Attached to 0000:00:11.0 00:08:46.947 [Child] Attached to 0000:00:13.0 00:08:46.947 [Child] Attached to 0000:00:12.0 00:08:46.947 [Child] Registering asynchronous event callbacks... 00:08:46.947 [Child] Getting orig temperature thresholds of all controllers 00:08:46.947 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:46.947 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 [Child] Cleaning up... 00:08:46.947 Asynchronous Event Request test 00:08:46.947 Attached to 0000:00:10.0 00:08:46.947 Attached to 0000:00:11.0 00:08:46.947 Attached to 0000:00:13.0 00:08:46.947 Attached to 0000:00:12.0 00:08:46.947 Reset controller to setup AER completions for this process 00:08:46.947 Registering asynchronous event callbacks... 
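The [Child] block that just completed is the secondary-process half of nvme_multi_aen: parent and child each attach to all four controllers, register AER callbacks, and drive the temperature-threshold path the parent now repeats. The mechanism the output narrates (drop each controller's temperature threshold below its current temperature, wait for the resulting AER against log page 2, restore the 343 K default) can be approximated from the shell on a kernel-owned controller with nvme-cli. This is a hedged stand-in for illustration only, not what the SPDK aer binary does, and it assumes Feature ID 0x04 (Temperature Threshold) and a device node at /dev/nvme0:

    # Hypothetical reproduction on a kernel-owned controller (the devices in
    # this run are vfio-bound and have no /dev/nvme* node).
    cur_c=$(nvme smart-log /dev/nvme0 | awk '/^temperature/ {print $3}')        # degrees C
    nvme set-feature /dev/nvme0 --feature-id=0x04 --value=$((cur_c + 273 - 5))  # Kelvin, below current
    nvme get-log /dev/nvme0 --log-id=0x02 --log-len=512 >/dev/null              # read SMART log page 2
    nvme set-feature /dev/nvme0 --feature-id=0x04 --value=343                   # restore 343 K (70 C)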
00:08:46.947 Getting orig temperature thresholds of all controllers 00:08:46.947 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.947 Setting all controllers temperature threshold low to trigger AER 00:08:46.947 Waiting for all controllers temperature threshold to be set lower 00:08:46.947 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:46.947 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:46.947 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:46.947 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.947 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:46.947 Waiting for all controllers to trigger AER and reset threshold 00:08:46.947 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.947 Cleaning up... 00:08:46.947 00:08:46.948 real 0m0.447s 00:08:46.948 user 0m0.137s 00:08:46.948 sys 0m0.202s 00:08:46.948 19:06:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.948 ************************************ 00:08:46.948 END TEST nvme_multi_aen 00:08:46.948 ************************************ 00:08:46.948 19:06:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:47.208 19:06:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:47.208 19:06:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:47.208 19:06:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.208 19:06:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.208 ************************************ 00:08:47.208 START TEST nvme_startup 00:08:47.208 ************************************ 00:08:47.208 19:06:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:47.208 Initializing NVMe Controllers 00:08:47.208 Attached to 0000:00:10.0 00:08:47.208 Attached to 0000:00:11.0 00:08:47.208 Attached to 0000:00:13.0 00:08:47.208 Attached to 0000:00:12.0 00:08:47.208 Initialization complete. 00:08:47.208 Time used:142039.359 (us). 
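Initialization of all four controllers took 142,039 µs, comfortably inside the 1,000,000 µs budget passed as -t on the startup command line above. A hedged shell-side version of the same budget check, reusing the "Time used" line the tool prints (the real binary appears to enforce -t internally, so this is illustration only):

    "$rootdir/test/nvme/startup/startup" -t 1000000 | tee /tmp/startup.log
    used_us=$(sed -n 's/.*Time used:\([0-9.]*\) (us).*/\1/p' /tmp/startup.log)
    awk -v u="$used_us" 'BEGIN { exit !(u < 1000000) }' || echo "init exceeded the budget"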
00:08:47.208 00:08:47.208 real 0m0.201s 00:08:47.208 user 0m0.068s 00:08:47.208 sys 0m0.090s 00:08:47.208 19:06:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.208 ************************************ 00:08:47.208 19:06:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:47.209 END TEST nvme_startup 00:08:47.209 ************************************ 00:08:47.469 19:06:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:47.469 19:06:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.469 19:06:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.469 19:06:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.469 ************************************ 00:08:47.469 START TEST nvme_multi_secondary 00:08:47.469 ************************************ 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63829 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63830 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:47.469 19:06:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:50.767 Initializing NVMe Controllers 00:08:50.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:50.767 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:50.767 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:50.767 Initialization complete. Launching workers. 
00:08:50.767 ======================================================== 00:08:50.767 Latency(us) 00:08:50.767 Device Information : IOPS MiB/s Average min max 00:08:50.767 PCIE (0000:00:10.0) NSID 1 from core 2: 1504.56 5.88 10631.90 1736.18 34419.94 00:08:50.767 PCIE (0000:00:11.0) NSID 1 from core 2: 1504.56 5.88 10634.44 1773.70 30785.31 00:08:50.767 PCIE (0000:00:13.0) NSID 1 from core 2: 1504.56 5.88 10635.97 1572.18 30907.00 00:08:50.767 PCIE (0000:00:12.0) NSID 1 from core 2: 1504.56 5.88 10637.13 1535.76 26562.91 00:08:50.767 PCIE (0000:00:12.0) NSID 2 from core 2: 1504.56 5.88 10639.59 1488.61 33882.97 00:08:50.767 PCIE (0000:00:12.0) NSID 3 from core 2: 1504.56 5.88 10657.50 1737.93 33548.46 00:08:50.767 ======================================================== 00:08:50.767 Total : 9027.38 35.26 10639.42 1488.61 34419.94 00:08:50.767 00:08:50.767 19:07:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63829 00:08:50.767 Initializing NVMe Controllers 00:08:50.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:50.767 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:50.767 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:50.767 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:50.767 Initialization complete. Launching workers. 00:08:50.767 ======================================================== 00:08:50.767 Latency(us) 00:08:50.767 Device Information : IOPS MiB/s Average min max 00:08:50.767 PCIE (0000:00:10.0) NSID 1 from core 1: 3667.78 14.33 4360.51 1365.16 12265.40 00:08:50.767 PCIE (0000:00:11.0) NSID 1 from core 1: 3667.78 14.33 4361.73 1407.20 13122.32 00:08:50.767 PCIE (0000:00:13.0) NSID 1 from core 1: 3667.78 14.33 4361.84 1343.16 12945.27 00:08:50.767 PCIE (0000:00:12.0) NSID 1 from core 1: 3667.78 14.33 4361.84 1359.92 12290.29 00:08:50.767 PCIE (0000:00:12.0) NSID 2 from core 1: 3667.78 14.33 4361.83 1303.98 11676.81 00:08:50.767 PCIE (0000:00:12.0) NSID 3 from core 1: 3667.78 14.33 4361.83 1391.94 13078.59 00:08:50.767 ======================================================== 00:08:50.767 Total : 22006.65 85.96 4361.60 1303.98 13122.32 00:08:50.767 00:08:52.679 Initializing NVMe Controllers 00:08:52.679 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.679 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.679 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.679 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.679 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:52.679 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:52.679 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:52.679 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:52.679 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:52.679 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:52.679 Initialization complete. Launching workers. 
00:08:52.679 ======================================================== 00:08:52.679 Latency(us) 00:08:52.679 Device Information : IOPS MiB/s Average min max 00:08:52.679 PCIE (0000:00:10.0) NSID 1 from core 0: 4683.01 18.29 3415.01 881.06 13619.37 00:08:52.679 PCIE (0000:00:11.0) NSID 1 from core 0: 4683.01 18.29 3416.13 911.82 11864.05 00:08:52.679 PCIE (0000:00:13.0) NSID 1 from core 0: 4683.01 18.29 3416.06 919.74 12239.08 00:08:52.679 PCIE (0000:00:12.0) NSID 1 from core 0: 4683.01 18.29 3416.01 904.31 13414.55 00:08:52.679 PCIE (0000:00:12.0) NSID 2 from core 0: 4683.01 18.29 3415.97 903.65 14077.01 00:08:52.679 PCIE (0000:00:12.0) NSID 3 from core 0: 4683.01 18.29 3415.92 899.94 14812.81 00:08:52.679 ======================================================== 00:08:52.679 Total : 28098.06 109.76 3415.85 881.06 14812.81 00:08:52.679 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63830 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63899 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63900 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:52.679 19:07:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:55.995 Initializing NVMe Controllers 00:08:55.995 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.995 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:55.995 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:55.995 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:55.995 Initialization complete. Launching workers. 
00:08:55.995 ======================================================== 00:08:55.995 Latency(us) 00:08:55.995 Device Information : IOPS MiB/s Average min max 00:08:55.995 PCIE (0000:00:10.0) NSID 1 from core 1: 6938.46 27.10 2304.56 1099.25 11035.91 00:08:55.995 PCIE (0000:00:11.0) NSID 1 from core 1: 6938.46 27.10 2305.59 1176.02 11461.91 00:08:55.995 PCIE (0000:00:13.0) NSID 1 from core 1: 6938.46 27.10 2305.54 1140.20 12887.97 00:08:55.995 PCIE (0000:00:12.0) NSID 1 from core 1: 6938.46 27.10 2305.55 1136.14 12792.68 00:08:55.995 PCIE (0000:00:12.0) NSID 2 from core 1: 6938.46 27.10 2305.63 1159.09 12854.07 00:08:55.995 PCIE (0000:00:12.0) NSID 3 from core 1: 6938.46 27.10 2305.60 1090.49 10939.37 00:08:55.995 ======================================================== 00:08:55.995 Total : 41630.74 162.62 2305.41 1090.49 12887.97 00:08:55.995 00:08:55.995 Initializing NVMe Controllers 00:08:55.995 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.995 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.995 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.995 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.995 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.995 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.995 Initialization complete. Launching workers. 00:08:55.995 ======================================================== 00:08:55.995 Latency(us) 00:08:55.995 Device Information : IOPS MiB/s Average min max 00:08:55.995 PCIE (0000:00:10.0) NSID 1 from core 0: 7087.80 27.69 2255.93 825.22 12695.20 00:08:55.995 PCIE (0000:00:11.0) NSID 1 from core 0: 7087.80 27.69 2256.77 841.16 12384.23 00:08:55.995 PCIE (0000:00:13.0) NSID 1 from core 0: 7087.80 27.69 2256.63 757.85 11428.71 00:08:55.995 PCIE (0000:00:12.0) NSID 1 from core 0: 7087.80 27.69 2256.51 704.38 12568.45 00:08:55.995 PCIE (0000:00:12.0) NSID 2 from core 0: 7087.80 27.69 2256.41 657.61 12988.54 00:08:55.995 PCIE (0000:00:12.0) NSID 3 from core 0: 7087.80 27.69 2256.31 629.50 13095.21 00:08:55.995 ======================================================== 00:08:55.995 Total : 42526.81 166.12 2256.42 629.50 13095.21 00:08:55.995 00:08:57.898 Initializing NVMe Controllers 00:08:57.898 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.898 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.898 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.898 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.898 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:57.898 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:57.898 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:57.898 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:57.898 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:57.898 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:57.898 Initialization complete. Launching workers. 
00:08:57.898 ======================================================== 00:08:57.898 Latency(us) 00:08:57.898 Device Information : IOPS MiB/s Average min max 00:08:57.898 PCIE (0000:00:10.0) NSID 1 from core 2: 4159.64 16.25 3845.89 821.19 21600.53 00:08:57.898 PCIE (0000:00:11.0) NSID 1 from core 2: 4159.64 16.25 3848.72 758.96 20257.27 00:08:57.898 PCIE (0000:00:13.0) NSID 1 from core 2: 4159.64 16.25 3848.83 845.09 20818.34 00:08:57.898 PCIE (0000:00:12.0) NSID 1 from core 2: 4159.64 16.25 3848.74 835.41 21271.10 00:08:57.898 PCIE (0000:00:12.0) NSID 2 from core 2: 4159.64 16.25 3848.48 820.01 24288.28 00:08:57.898 PCIE (0000:00:12.0) NSID 3 from core 2: 4159.64 16.25 3848.21 849.82 21101.52 00:08:57.898 ======================================================== 00:08:57.898 Total : 24957.82 97.49 3848.15 758.96 24288.28 00:08:57.898 00:08:57.898 19:07:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63899 00:08:57.899 19:07:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63900 00:08:57.899 00:08:57.899 real 0m10.586s 00:08:57.899 user 0m18.356s 00:08:57.899 sys 0m0.811s 00:08:57.899 19:07:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.899 19:07:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:57.899 ************************************ 00:08:57.899 END TEST nvme_multi_secondary 00:08:57.899 ************************************ 00:08:57.899 19:07:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:57.899 19:07:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:57.899 19:07:07 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62862 ]] 00:08:57.899 19:07:07 nvme -- common/autotest_common.sh@1094 -- # kill 62862 00:08:57.899 19:07:07 nvme -- common/autotest_common.sh@1095 -- # wait 62862 00:08:57.899 [2024-11-27 19:07:07.483667] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.483759] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.483797] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.483821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.486884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.486954] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.486976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.486999] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.489541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 
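The repeated "owning process (pid ...) is not found. Dropping the request" errors here and earlier in this section (pids 63777 and 63258) are expected teardown noise rather than failures: those pids belong to test processes that have already exited, and when kill_stub stops the long-lived stub holding the controllers, the pending admin requests (the AER registrations) queued by the departed processes are orphaned and discarded. Every surrounding test still reaches its END TEST banner. The kill_stub trace around this point (autotest_common.sh@1093-1097) condenses to roughly:

    # Hedged condensation of the trace; the variable name is illustrative.
    stubpid=62862
    if [[ -e /proc/$stubpid ]]; then
        kill "$stubpid"           # stop the stub that owns the NVMe controllers
        wait "$stubpid" || true   # reap it; orphaned admin requests are dropped here
    fi
    rm -f /var/run/spdk_stub0     # remove the stub's ready marker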
00:08:57.899 [2024-11-27 19:07:07.489576] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.489596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.489608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.490990] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.491026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.491036] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:57.899 [2024-11-27 19:07:07.491046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63777) is not found. Dropping the request. 00:08:58.161 19:07:07 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:58.161 19:07:07 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:58.161 19:07:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:58.161 19:07:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.161 19:07:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.161 19:07:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.161 ************************************ 00:08:58.161 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:58.161 ************************************ 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:58.161 * Looking for test storage... 
00:08:58.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:58.161 
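The scripts/common.sh trace just above is lcov version detection: lt 1.15 2 splits both versions on dots and compares them field by field, and because 1.15 sorts below 2 the branch- and function-coverage --rc options get exported for this run. An illustrative reimplementation of the comparison, not the verbatim helper:

    version_lt() {                        # returns 0 when $1 < $2
        local IFS=.- v
        local -a a b
        read -ra a <<<"$1"; read -ra b <<<"$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            ((${a[v]:-0} < ${b[v]:-0})) && return 0
            ((${a[v]:-0} > ${b[v]:-0})) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags apply"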
19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:58.161 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:58.162 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64067 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64067 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64067 ']' 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
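With the target bdf resolved to 0000:00:10.0 via gen_nvme.sh | jq, the test brings up its own SPDK target on four cores and blocks until the RPC socket answers; the trap guarantees the target is killed if the script aborts. The launch traced at nvme_reset_stuck_adm_cmd.sh@35-38 reduces to roughly:

    # Hedged sketch of the launch sequence traced above.
    "$rootdir/build/bin/spdk_tgt" -m 0xF &        # reactor mask 0xF -> cores 0-3
    spdk_target_pid=$!
    trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_target_pid"              # poll /var/tmp/spdk.sock until RPC is up

Once the target is listening, bdev_nvme_attach_controller creates nvme0 on that bdf and bdev_nvme_add_error_injection arms the one-shot stuck command (admin opcode 10, i.e. Get Features, held for up to 15 s with sct 0 / sc 1) that the controller reset below has to flush.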
00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.421 19:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.421 [2024-11-27 19:07:07.911453] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:58.421 [2024-11-27 19:07:07.911896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64067 ] 00:08:58.680 [2024-11-27 19:07:08.084454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.680 [2024-11-27 19:07:08.195679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.680 [2024-11-27 19:07:08.195832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.680 [2024-11-27 19:07:08.195951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.680 [2024-11-27 19:07:08.196119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.247 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.247 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:59.247 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:59.247 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.247 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:59.505 nvme0n1 00:08:59.505 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.505 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:59.505 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_dKWmO.txt 00:08:59.505 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:59.506 true 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732734428 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64090 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:59.506 19:07:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.404 [2024-11-27 19:07:10.931244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:01.404 [2024-11-27 19:07:10.931915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:01.404 [2024-11-27 19:07:10.931955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:01.404 [2024-11-27 19:07:10.931971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.404 [2024-11-27 19:07:10.933764] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.404 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64090 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64090 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64090 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:01.404 19:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_dKWmO.txt 00:09:01.404 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:01.404 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:01.404 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_dKWmO.txt 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64067 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64067 ']' 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64067 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.405 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64067 00:09:01.663 killing process with pid 64067 00:09:01.663 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.663 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.663 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64067' 00:09:01.663 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64067 00:09:01.663 19:07:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64067 00:09:03.038 19:07:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:03.038 19:07:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:03.038 00:09:03.038 real 0m4.781s 00:09:03.038 user 0m16.891s 00:09:03.038 sys 0m0.563s 00:09:03.038 19:07:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:03.038 ************************************ 00:09:03.038 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:03.038 19:07:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.038 ************************************ 00:09:03.038 19:07:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:03.038 19:07:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:03.038 19:07:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.038 19:07:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.038 19:07:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.038 ************************************ 00:09:03.038 START TEST nvme_fio 00:09:03.038 ************************************ 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:03.038 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.038 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:03.299 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.299 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.560 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.560 19:07:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.560 19:07:12 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.560 19:07:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.560 19:07:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.560 19:07:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.560 19:07:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.560 19:07:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.560 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.560 fio-3.35 00:09:03.560 Starting 1 thread 00:09:10.139 00:09:10.139 test: (groupid=0, jobs=1): err= 0: pid=64224: Wed Nov 27 19:07:19 2024 00:09:10.139 read: IOPS=23.5k, BW=91.7MiB/s (96.1MB/s)(183MiB/2001msec) 00:09:10.139 slat (usec): min=4, max=298, avg= 4.80, stdev= 2.26 00:09:10.139 clat (usec): min=200, max=8950, avg=2717.88, stdev=667.97 00:09:10.139 lat (usec): min=205, max=9000, avg=2722.68, stdev=669.02 00:09:10.139 clat percentiles (usec): 00:09:10.139 | 1.00th=[ 2024], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:10.139 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:10.139 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 3163], 95.00th=[ 3687], 00:09:10.139 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 6915], 99.95th=[ 7111], 00:09:10.139 | 99.99th=[ 8717] 00:09:10.139 bw ( KiB/s): min=90120, max=96048, per=98.89%, avg=92850.67, stdev=2991.43, samples=3 00:09:10.139 iops : min=22530, max=24012, avg=23212.67, stdev=747.86, samples=3 00:09:10.139 write: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec); 0 zone resets 00:09:10.140 slat (nsec): min=4293, max=51473, avg=5052.58, stdev=1651.51 00:09:10.140 clat (usec): min=222, max=8770, avg=2730.66, stdev=670.80 00:09:10.140 lat (usec): min=226, max=8784, avg=2735.71, stdev=671.75 00:09:10.140 clat percentiles (usec): 00:09:10.140 | 1.00th=[ 2040], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:10.140 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:09:10.140 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3163], 95.00th=[ 3687], 00:09:10.140 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 6915], 99.95th=[ 7111], 00:09:10.140 | 99.99th=[ 8586] 00:09:10.140 bw ( KiB/s): min=89528, max=95304, per=99.68%, avg=92952.00, stdev=3033.55, samples=3 00:09:10.140 iops : min=22382, max=23826, avg=23238.00, stdev=758.39, samples=3 00:09:10.140 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:10.140 lat (msec) : 2=0.74%, 4=95.30%, 10=3.92% 00:09:10.140 cpu : usr=99.20%, sys=0.10%, ctx=6, majf=0, minf=606 
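Each nvme_fio pass runs stock fio with SPDK's ioengine plugin, and because this build is ASAN-instrumented the wrapper first ldd's the plugin, picks out libasan, and preloads it ahead of the plugin so the sanitizer runtime initializes before any instrumented code. The fio_plugin trace above (autotest_common.sh@1341-1356) condenses to roughly:

    # Hedged condensation of the trace; paths are the ones visible in this run.
    plugin=$rootdir/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        "$rootdir/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096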
00:09:10.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:10.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.140 issued rwts: total=46970,46647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.140 00:09:10.140 Run status group 0 (all jobs): 00:09:10.140 READ: bw=91.7MiB/s (96.1MB/s), 91.7MiB/s-91.7MiB/s (96.1MB/s-96.1MB/s), io=183MiB (192MB), run=2001-2001msec 00:09:10.140 WRITE: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:09:10.140 ----------------------------------------------------- 00:09:10.140 Suppressions used: 00:09:10.140 count bytes template 00:09:10.140 1 32 /usr/src/fio/parse.c 00:09:10.140 1 8 libtcmalloc_minimal.so 00:09:10.140 ----------------------------------------------------- 00:09:10.140 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.140 19:07:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:10.140 19:07:19 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.140 19:07:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:10.400 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.400 fio-3.35 00:09:10.400 Starting 1 thread 00:09:16.988 00:09:16.988 test: (groupid=0, jobs=1): err= 0: pid=64285: Wed Nov 27 19:07:26 2024 00:09:16.988 read: IOPS=22.0k, BW=86.1MiB/s (90.2MB/s)(172MiB/2001msec) 00:09:16.988 slat (nsec): min=4208, max=54949, avg=5196.88, stdev=2430.04 00:09:16.988 clat (usec): min=341, max=12770, avg=2892.42, stdev=927.63 00:09:16.988 lat (usec): min=345, max=12815, avg=2897.62, stdev=929.16 00:09:16.988 clat percentiles (usec): 00:09:16.988 | 1.00th=[ 2040], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:16.988 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:16.988 | 70.00th=[ 2835], 80.00th=[ 3032], 90.00th=[ 3523], 95.00th=[ 5276], 00:09:16.988 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 9372], 00:09:16.988 | 99.99th=[12387] 00:09:16.988 bw ( KiB/s): min=80248, max=91712, per=99.14%, avg=87365.33, stdev=6213.96, samples=3 00:09:16.988 iops : min=20062, max=22928, avg=21841.33, stdev=1553.49, samples=3 00:09:16.988 write: IOPS=21.9k, BW=85.5MiB/s (89.7MB/s)(171MiB/2001msec); 0 zone resets 00:09:16.988 slat (nsec): min=4332, max=80194, avg=5481.77, stdev=2597.10 00:09:16.988 clat (usec): min=324, max=12523, avg=2912.52, stdev=940.24 00:09:16.988 lat (usec): min=329, max=12536, avg=2918.00, stdev=941.83 00:09:16.988 clat percentiles (usec): 00:09:16.988 | 1.00th=[ 2057], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:16.988 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:16.988 | 70.00th=[ 2868], 80.00th=[ 3032], 90.00th=[ 3556], 95.00th=[ 5342], 00:09:16.988 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 9896], 00:09:16.988 | 99.99th=[11994] 00:09:16.988 bw ( KiB/s): min=80152, max=92192, per=99.96%, avg=87517.33, stdev=6455.24, samples=3 00:09:16.988 iops : min=20038, max=23048, avg=21879.33, stdev=1613.81, samples=3 00:09:16.988 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:09:16.988 lat (msec) : 2=0.56%, 4=91.28%, 10=8.06%, 20=0.05% 00:09:16.988 cpu : usr=99.10%, sys=0.15%, ctx=3, majf=0, minf=606 00:09:16.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.988 issued rwts: total=44083,43799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.988 00:09:16.988 Run status group 0 (all jobs): 00:09:16.988 READ: bw=86.1MiB/s (90.2MB/s), 86.1MiB/s-86.1MiB/s (90.2MB/s-90.2MB/s), io=172MiB (181MB), run=2001-2001msec 00:09:16.988 WRITE: bw=85.5MiB/s (89.7MB/s), 85.5MiB/s-85.5MiB/s (89.7MB/s-89.7MB/s), io=171MiB (179MB), run=2001-2001msec 00:09:16.988 ----------------------------------------------------- 00:09:16.988 Suppressions used: 00:09:16.988 count bytes template 00:09:16.988 1 32 /usr/src/fio/parse.c 00:09:16.988 1 8 libtcmalloc_minimal.so 00:09:16.988 ----------------------------------------------------- 00:09:16.988 00:09:16.988 
19:07:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.988 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.988 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.988 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.246 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:17.246 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.505 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.505 19:07:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:17.505 19:07:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:17.505 19:07:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.505 19:07:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.505 19:07:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:17.505 19:07:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.505 19:07:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.762 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.762 fio-3.35 00:09:17.762 Starting 1 thread 00:09:24.318 00:09:24.318 test: (groupid=0, jobs=1): err= 0: pid=64346: Wed Nov 27 19:07:33 2024 00:09:24.318 read: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec) 00:09:24.318 slat (nsec): min=3458, max=72251, avg=5162.31, stdev=2434.92 00:09:24.318 clat (usec): min=211, max=8619, avg=2894.79, stdev=820.81 00:09:24.318 lat (usec): min=215, max=8692, avg=2899.95, stdev=822.30 00:09:24.318 clat percentiles (usec): 00:09:24.318 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:24.318 | 30.00th=[ 2606], 
40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:24.318 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3458], 95.00th=[ 5080], 00:09:24.318 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 6849], 99.95th=[ 7046], 00:09:24.318 | 99.99th=[ 8455] 00:09:24.318 bw ( KiB/s): min=80590, max=95288, per=100.00%, avg=88452.67, stdev=7402.66, samples=3 00:09:24.318 iops : min=20147, max=23822, avg=22113.00, stdev=1850.93, samples=3 00:09:24.318 write: IOPS=21.9k, BW=85.6MiB/s (89.7MB/s)(171MiB/2001msec); 0 zone resets 00:09:24.318 slat (nsec): min=3548, max=59671, avg=5554.70, stdev=2443.24 00:09:24.318 clat (usec): min=240, max=8452, avg=2905.16, stdev=838.57 00:09:24.318 lat (usec): min=245, max=8493, avg=2910.72, stdev=840.10 00:09:24.318 clat percentiles (usec): 00:09:24.318 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:24.318 | 30.00th=[ 2606], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:24.318 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3458], 95.00th=[ 5211], 00:09:24.318 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 6915], 99.95th=[ 7046], 00:09:24.318 | 99.99th=[ 8094] 00:09:24.318 bw ( KiB/s): min=80510, max=95840, per=100.00%, avg=88626.00, stdev=7704.70, samples=3 00:09:24.318 iops : min=20127, max=23960, avg=22156.33, stdev=1926.44, samples=3 00:09:24.318 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:24.318 lat (msec) : 2=0.35%, 4=91.71%, 10=7.89% 00:09:24.318 cpu : usr=99.20%, sys=0.05%, ctx=6, majf=0, minf=607 00:09:24.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.319 issued rwts: total=44115,43826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.319 00:09:24.319 Run status group 0 (all jobs): 00:09:24.319 READ: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:09:24.319 WRITE: bw=85.6MiB/s (89.7MB/s), 85.6MiB/s-85.6MiB/s (89.7MB/s-89.7MB/s), io=171MiB (180MB), run=2001-2001msec 00:09:24.319 ----------------------------------------------------- 00:09:24.319 Suppressions used: 00:09:24.319 count bytes template 00:09:24.319 1 32 /usr/src/fio/parse.c 00:09:24.319 1 8 libtcmalloc_minimal.so 00:09:24.319 ----------------------------------------------------- 00:09:24.319 00:09:24.319 19:07:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.319 19:07:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:24.319 19:07:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:24.319 19:07:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:24.576 19:07:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:24.576 19:07:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:24.835 19:07:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.835 19:07:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.835 19:07:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.835 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.835 fio-3.35 00:09:24.835 Starting 1 thread 00:09:34.799 00:09:34.799 test: (groupid=0, jobs=1): err= 0: pid=64402: Wed Nov 27 19:07:42 2024 00:09:34.799 read: IOPS=21.8k, BW=85.3MiB/s (89.4MB/s)(171MiB/2001msec) 00:09:34.799 slat (nsec): min=4806, max=82010, avg=5658.28, stdev=2000.37 00:09:34.799 clat (usec): min=265, max=8872, avg=2927.91, stdev=713.85 00:09:34.799 lat (usec): min=271, max=8885, avg=2933.57, stdev=714.88 00:09:34.799 clat percentiles (usec): 00:09:34.799 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:09:34.799 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:34.799 | 70.00th=[ 2737], 80.00th=[ 3097], 90.00th=[ 3687], 95.00th=[ 4178], 00:09:34.799 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7898], 99.95th=[ 8586], 00:09:34.799 | 99.99th=[ 8848] 00:09:34.799 bw ( KiB/s): min=83072, max=88600, per=99.20%, avg=86613.33, stdev=3074.48, samples=3 00:09:34.799 iops : min=20768, max=22150, avg=21653.33, stdev=768.62, samples=3 00:09:34.799 write: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(169MiB/2001msec); 0 zone resets 00:09:34.799 slat (nsec): min=4945, max=54735, avg=6054.53, stdev=1984.60 00:09:34.799 clat (usec): min=202, max=8902, avg=2934.34, stdev=721.75 00:09:34.799 lat (usec): min=207, max=8915, avg=2940.40, stdev=722.79 00:09:34.799 clat percentiles (usec): 00:09:34.799 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:09:34.799 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:34.799 | 70.00th=[ 2737], 80.00th=[ 3130], 90.00th=[ 3687], 95.00th=[ 4228], 00:09:34.799 
| 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7701], 99.95th=[ 8455], 00:09:34.799 | 99.99th=[ 8848] 00:09:34.799 bw ( KiB/s): min=83080, max=89496, per=100.00%, avg=86821.33, stdev=3338.35, samples=3 00:09:34.799 iops : min=20770, max=22374, avg=21705.33, stdev=834.59, samples=3 00:09:34.799 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:34.799 lat (msec) : 2=0.07%, 4=93.75%, 10=6.13% 00:09:34.799 cpu : usr=99.35%, sys=0.00%, ctx=4, majf=0, minf=604 00:09:34.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.799 issued rwts: total=43676,43374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.799 00:09:34.799 Run status group 0 (all jobs): 00:09:34.799 READ: bw=85.3MiB/s (89.4MB/s), 85.3MiB/s-85.3MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:09:34.799 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=169MiB (178MB), run=2001-2001msec 00:09:34.799 ----------------------------------------------------- 00:09:34.799 Suppressions used: 00:09:34.799 count bytes template 00:09:34.799 1 32 /usr/src/fio/parse.c 00:09:34.799 1 8 libtcmalloc_minimal.so 00:09:34.799 ----------------------------------------------------- 00:09:34.799 00:09:34.799 19:07:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.799 19:07:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:34.799 00:09:34.799 real 0m30.550s 00:09:34.799 user 0m22.910s 00:09:34.799 sys 0m11.885s 00:09:34.799 19:07:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.799 ************************************ 00:09:34.799 END TEST nvme_fio 00:09:34.799 ************************************ 00:09:34.799 19:07:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:34.799 ************************************ 00:09:34.799 END TEST nvme 00:09:34.799 ************************************ 00:09:34.799 00:09:34.799 real 1m39.881s 00:09:34.799 user 3m43.888s 00:09:34.799 sys 0m22.719s 00:09:34.799 19:07:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.799 19:07:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.799 19:07:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:34.799 19:07:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.799 19:07:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.799 19:07:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.799 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:09:34.799 ************************************ 00:09:34.799 START TEST nvme_scc 00:09:34.799 ************************************ 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.799 * Looking for test storage... 
00:09:34.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.799 --rc genhtml_branch_coverage=1 00:09:34.799 --rc genhtml_function_coverage=1 00:09:34.799 --rc genhtml_legend=1 00:09:34.799 --rc geninfo_all_blocks=1 00:09:34.799 --rc geninfo_unexecuted_blocks=1 00:09:34.799 00:09:34.799 ' 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.799 --rc genhtml_branch_coverage=1 00:09:34.799 --rc genhtml_function_coverage=1 00:09:34.799 --rc genhtml_legend=1 00:09:34.799 --rc geninfo_all_blocks=1 00:09:34.799 --rc geninfo_unexecuted_blocks=1 00:09:34.799 00:09:34.799 ' 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:34.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.799 --rc genhtml_branch_coverage=1 00:09:34.799 --rc genhtml_function_coverage=1 00:09:34.799 --rc genhtml_legend=1 00:09:34.799 --rc geninfo_all_blocks=1 00:09:34.799 --rc geninfo_unexecuted_blocks=1 00:09:34.799 00:09:34.799 ' 00:09:34.799 19:07:43 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.799 --rc genhtml_branch_coverage=1 00:09:34.799 --rc genhtml_function_coverage=1 00:09:34.799 --rc genhtml_legend=1 00:09:34.799 --rc geninfo_all_blocks=1 00:09:34.799 --rc geninfo_unexecuted_blocks=1 00:09:34.799 00:09:34.799 ' 00:09:34.799 19:07:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.799 19:07:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.799 19:07:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:34.799 19:07:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:34.799 19:07:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.799 19:07:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.799 19:07:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.800 19:07:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.800 19:07:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.800 19:07:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:34.800 19:07:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
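[editor's note] The lcov version gate traced above (`lt 1.15 2` via scripts/common.sh) splits dotted version strings on '.', '-' and ':' and compares them component-wise. A condensed sketch of that pattern; the function names match the trace, but the body here is a paraphrase of the traced logic, not the exact upstream code:

#!/usr/bin/env bash
cmp_versions() {
    local IFS=.-:                       # split versions on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v d1 d2
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # missing components count as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                   # every component matched
}

lt() { cmp_versions "$1" '<' "$2"; }    # `lt 1.15 2` succeeds, so the
                                        # pre-2.0 lcov option set is exported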
00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:34.800 19:07:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:34.800 19:07:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.800 19:07:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:34.800 19:07:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:34.800 19:07:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:34.800 19:07:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:34.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.800 Waiting for block devices as requested 00:09:34.800 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.800 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.079 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.079 19:07:48 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:40.079 19:07:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.079 19:07:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:40.079 19:07:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.079 19:07:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:40.081 19:07:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:40.081 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 19:07:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.083 19:07:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:40.083 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:40.084 
19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
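The cycle repeating above is nvme_get (nvme/functions.sh@16-23) walking the text report of /usr/local/src/nvme-cli/nvme id-ns: each "name : value" line is split at the first colon by read -r with IFS=:, the [[ -n ... ]] guard at @22 skips empty values, and the eval at @23 stores the pair in a global associative array named after the device node (ng0n1 here). Because read fills val with everything after the first colon, multi-colon strings such as "mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0" survive as a single value. A minimal standalone sketch of the idiom; nvme_get_sketch and its whitespace trimming are illustrative, not the repo's exact code:

    #!/usr/bin/env bash
    # Sketch only: parse "name : value" report lines into a global
    # associative array, mirroring the nvme_get loop traced above.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # declares e.g. the global array ng0n1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # same guard as functions.sh@22
            reg=${reg%%[[:space:]]*}      # "nsze   " -> "nsze"
            val=${val# }                  # drop the space after the colon
            eval "${ref}[\$reg]=\$val"    # ng0n1[nsze]=0x140000, etc.
        done < <("$@")
    }
    # Usage (assumes nvme-cli is installed):
    #   nvme_get_sketch ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1

Each [[ -n ... ]] / eval pair printed by set -x above is one iteration of exactly this kind of loop.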
00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:40.084 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:40.084 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:40.085 19:07:49 nvme_scc 
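Worth decoding once: flbas=0x4 on this namespace selects LBA format 4, and lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. no interleaved metadata and 2^12 = 4096-byte logical blocks; with nsze=0x140000 blocks the namespace comes to 5 GiB. A quick standalone check (plain bash arithmetic, not part of the test script):

    # flbas bits 3:0 pick the active LBA format; lbaf4 has lbads=12 (4 KiB).
    nsze=0x140000 lbads=12
    echo $(( nsze * (1 << lbads) ))   # 5368709120 bytes = 5 GiB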
-- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.085 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:40.086 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.086 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:40.087 19:07:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.087 19:07:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:40.087 19:07:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.087 19:07:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:40.087 19:07:49 
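This is the hand-off point: functions.sh@58-63 file the parsed arrays away (per-namespace entries in _ctrl_ns, then ctrls, nvmes and bdfs keyed by the controller device, and ordered_ctrls by the controller number), and the @47 loop advances to nvme1 at PCI 0000:00:10.0; pci_can_use returns 0, apparently because no PCI allow/deny list is exported, which is why the left side of the [[ =~ ]] test in scripts/common.sh@21 expands empty. A hypothetical standalone rendering of that bookkeeping (variable names follow the trace; the readlink-based BDF lookup is an assumption, not the script's code):

    # Sketch of the controller bookkeeping traced at functions.sh@47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, ...
        bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0 (assumed lookup)
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of that ctrl's namespace map
        bdfs["$ctrl_dev"]=$bdf
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index 0 -> nvme0, 1 -> nvme1
    done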
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.087 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 
19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:40.088 
19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.088 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:40.089 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.089 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
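Two of the values just captured are packed nibbles: sqes=0x66 and cqes=0x44 encode the maximum (bits 7:4) and required minimum (bits 3:0) queue entry sizes as powers of two, which works out to the NVMe baseline of 64-byte submission and 16-byte completion entries. Quick shell-arithmetic check:

    sqes=0x66 cqes=0x44
    echo "SQE min $((1 << (sqes & 0xf))) B, max $((1 << (sqes >> 4))) B"   # 64 / 64
    echo "CQE min $((1 << (cqes & 0xf))) B, max $((1 << (cqes >> 4))) B"   # 16 / 16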
00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.090 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.091 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.091 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.092 19:07:49 
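
The xtrace above replays the same few lines of nvme/functions.sh (16-23 in the trace) over and over: run nvme-cli, split each "reg: val" line on ":", and eval the pair into a global associative array. A minimal sketch of that loop, assuming the trace's function name and nvme-cli path; the exact whitespace trimming is an assumption:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declare -gA nvme1=() as at functions.sh@20
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banner/blank lines, as at functions.sh@22
        reg=${reg//[[:space:]]/}           # "sqes " -> "sqes"
        val=${val# }                       # drop the space after ":"
        eval "${ref}[${reg}]=\"\${val}\""  # nvme1[sqes]=0x66 (input trusted, from nvme-cli)
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

# Usage mirroring the trace: nvme_get nvme1 id-ctrl /dev/nvme1
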
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.092 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:40.093 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:40.093 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 
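
Between the two id-ns dumps, note why each namespace is visited twice: the loop at functions.sh@54 globs both the generic character node (ng1n1) and the block node (nvme1n1) for the same namespace. A sketch of that enumeration, assuming extglob is enabled by the sourcing script (the @(...) alternation requires it):

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # Pattern expands to ng1* and nvme1n* entries: ng1n1, nvme1n1, ...
    [[ -e $ns ]] || continue          # existence guard, as at functions.sh@55
    echo "namespace node: ${ns##*/}"
done
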
19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:40.094 
19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.094 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.095 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.095 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:40.096 19:07:49 
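
Both namespace nodes report the same format data: nlbaf=7 (eight LBA formats), flbas=0x7, and lbaf7 marked "(in use)". A hedged decode of those values, relying on the NVMe spec's layout (low nibble of FLBAS indexes the active format, LBADS is log2 of the data size, MS is metadata bytes per block):

flbas=0x7 lbads=12 ms=64
fmt=$(( flbas & 0xf ))
printf 'format %d in use: %d-byte blocks + %d bytes metadata\n' \
    "$fmt" $(( 1 << lbads )) "$ms"
# -> format 7 in use: 4096-byte blocks + 64 bytes metadata
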
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:40.096 19:07:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.096 19:07:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:40.096 19:07:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.096 19:07:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.096 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
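
The ver field just parsed (nvme2[ver]=0x10400) is the packed NVMe version register; a hedged one-liner decode, per the spec layout of major/minor/tertiary bytes:

ver=0x10400
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( ver >> 8 & 0xff )) $(( ver & 0xff ))
# -> NVMe 1.4.0
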
00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:40.097 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.097 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:40.098 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.098 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:40.099 
19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:40.099 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.100 
19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:40.100 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.101 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:40.102 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.102 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 
19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:40.103 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.104 19:07:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.104 19:07:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:40.104 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:40.105 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.106 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.107 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:40.107 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:40.107 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.108 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:40.109 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.109 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.110 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:40.111 
19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:40.111 19:07:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:40.111 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.112 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.112 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:40.113 19:07:49 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:40.113 19:07:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.113 19:07:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.113 19:07:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.113 19:07:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.113 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:40.114 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:40.114 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.114 
19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.114 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:40.115 19:07:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 
19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.115 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:40.116 
19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.116 19:07:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.116 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:40.117 19:07:49 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
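Note: the ctrl_has_scc checks traced around this point decide which controller the nvme_scc test will target. For each nvmeX the harness reads the ONCS (Optional NVM Command Support) field collected from nvme id-ctrl and keeps the controller only if bit 8, the Simple Copy command bit, is set. Each of the four QEMU controllers here reports oncs=0x15d (nvme3 and nvme2 are checked just below), and 0x15d & 0x100 is non-zero, so all of them qualify and nvme1 is selected first. A minimal stand-alone sketch of that bit test follows; check_scc_support is an illustrative name, not a helper from functions.sh, and the awk parse assumes nvme-cli's human-readable id-ctrl output:

check_scc_support() {
    local dev=$1 oncs
    # nvme-cli prints a line like "oncs : 0x15d"; grab the hex value
    oncs=$(nvme id-ctrl "$dev" | awk '/^oncs/ {print $3}')
    # bit 8 of ONCS advertises the Simple Copy command -- the same test
    # functions.sh performs with (( oncs & 1 << 8 ))
    (( oncs & 1 << 8 ))
}

check_scc_support /dev/nvme1 && echo "nvme1 supports Simple Copy"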
00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:40.117 19:07:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:40.117 19:07:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:40.117 19:07:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.944 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.944 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.944 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.944 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.944 19:07:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:40.944 19:07:50 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:40.944 19:07:50 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.944 19:07:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:40.944 ************************************ 00:09:40.944 START TEST nvme_simple_copy 00:09:40.944 ************************************ 00:09:40.944 19:07:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:41.202 Initializing NVMe Controllers 00:09:41.202 Attaching to 0000:00:10.0 00:09:41.202 Controller supports SCC. Attached to 0000:00:10.0 00:09:41.202 Namespace ID: 1 size: 6GB 00:09:41.202 Initialization complete. 
00:09:41.202 00:09:41.202 Controller QEMU NVMe Ctrl (12340 ) 00:09:41.202 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:41.202 Namespace Block Size:4096 00:09:41.202 Writing LBAs 0 to 63 with Random Data 00:09:41.202 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:41.202 LBAs matching Written Data: 64 00:09:41.202 00:09:41.202 real 0m0.260s 00:09:41.202 user 0m0.097s 00:09:41.202 sys 0m0.062s 00:09:41.202 19:07:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.202 19:07:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 ************************************ 00:09:41.202 END TEST nvme_simple_copy 00:09:41.202 ************************************ 00:09:41.202 ************************************ 00:09:41.202 END TEST nvme_scc 00:09:41.202 ************************************ 00:09:41.202 00:09:41.202 real 0m7.676s 00:09:41.202 user 0m1.126s 00:09:41.202 sys 0m1.390s 00:09:41.202 19:07:50 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.202 19:07:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 19:07:50 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:41.202 19:07:50 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:41.202 19:07:50 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:41.202 19:07:50 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:41.202 19:07:50 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:41.202 19:07:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.202 19:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.202 19:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.202 ************************************ 00:09:41.202 START TEST nvme_fdp 00:09:41.202 ************************************ 00:09:41.202 19:07:50 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:41.469 * Looking for test storage... 00:09:41.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.469 19:07:50 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.469 --rc genhtml_branch_coverage=1 00:09:41.469 --rc genhtml_function_coverage=1 00:09:41.469 --rc genhtml_legend=1 00:09:41.469 --rc geninfo_all_blocks=1 00:09:41.469 --rc geninfo_unexecuted_blocks=1 00:09:41.469 00:09:41.469 ' 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.469 --rc genhtml_branch_coverage=1 00:09:41.469 --rc genhtml_function_coverage=1 00:09:41.469 --rc genhtml_legend=1 00:09:41.469 --rc geninfo_all_blocks=1 00:09:41.469 --rc geninfo_unexecuted_blocks=1 00:09:41.469 00:09:41.469 ' 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.469 --rc genhtml_branch_coverage=1 00:09:41.469 --rc genhtml_function_coverage=1 00:09:41.469 --rc genhtml_legend=1 00:09:41.469 --rc geninfo_all_blocks=1 00:09:41.469 --rc geninfo_unexecuted_blocks=1 00:09:41.469 00:09:41.469 ' 00:09:41.469 19:07:50 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.469 --rc genhtml_branch_coverage=1 00:09:41.469 --rc genhtml_function_coverage=1 00:09:41.469 --rc genhtml_legend=1 00:09:41.470 --rc geninfo_all_blocks=1 00:09:41.470 --rc geninfo_unexecuted_blocks=1 00:09:41.470 00:09:41.470 ' 00:09:41.470 19:07:50 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.470 19:07:50 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.470 19:07:50 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.470 19:07:50 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.470 19:07:50 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.470 19:07:50 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.470 19:07:50 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.470 19:07:50 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.470 19:07:50 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:41.470 19:07:50 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:41.470 19:07:50 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:41.470 19:07:50 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.470 19:07:50 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.772 Waiting for block devices as requested 00:09:42.057 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.057 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.057 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.057 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.332 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:47.332 19:07:56 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:47.332 19:07:56 nvme_fdp 
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:47.332 19:07:56 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:47.332 19:07:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:47.332 19:07:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:47.332 19:07:56 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:09:47.332 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:47.333 19:07:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:47.333 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:47.333 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.334 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.334 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.334 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:47.335 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 
19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.335 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:47.336 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:47.336 19:07:56 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:47.336 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.336 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
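The ng0n1 keys being filled in here come from the namespace loop recorded a few lines up, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which pairs each controller with both its character-device nodes (ng0n1) and its block-device nodes (nvme0n1). A small sketch of how that extglob pattern expands, assuming the same /sys layout as this VM; the loop body and echo are illustrative only:

#!/usr/bin/env bash
# Sketch: the namespace glob from functions.sh@54. It needs extglob, which
# scripts/common.sh enabled earlier with 'shopt -s extglob'.
shopt -s extglob

ctrl=/sys/class/nvme/nvme0
# ng${ctrl##*nvme} -> ng0     (char-device namespaces: ng0n1, ng0n2, ...)
# ${ctrl##*/}n     -> nvme0n  (block-device namespaces: nvme0n1, ...)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  # Without nullglob an unmatched pattern stays literal, so re-check -e.
  [[ -e $ns ]] && echo "namespace node: ${ns##*/}"
done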
00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.337 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:47.338 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
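The lbaf table just dumped pins down the geometry of ng0n1: flbas=0x4 selects lbaf4 (ms:0 lbads:12, marked in use), i.e. 4096-byte data blocks with no metadata, and nsze=0x140000 is the namespace size in logical blocks. A quick arithmetic check with the values copied from the trace; the variable names are illustrative:

#!/usr/bin/env bash
# Decode the in-use LBA format and namespace size reported above.
flbas=0x4      # bits 3:0 index the lbafN list -> lbaf4 "ms:0 lbads:12 rp:0 (in use)"
lbads=12       # log2 of the data block size
nsze=0x140000  # namespace size in logical blocks

idx=$(( flbas & 0xf ))
block=$(( 1 << lbads ))    # 4096 bytes
echo "lbaf$idx: $block-byte blocks, $(( nsze )) blocks, $(( nsze * block )) bytes"
# -> lbaf4: 4096-byte blocks, 1310720 blocks, 5368709120 bytes (= 5 GiB)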
00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:47.338 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
    nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
    mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
    nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:47.340 19:07:56 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:47.340 19:07:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
'nvme1[sn]="12340 "' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:47.341 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.341 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.342 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:47.343 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:47.344 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:47.345 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:47.345 19:07:56 
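The @53/@54 lines above re-point the _ctrl_ns nameref at nvme1_ns and iterate an extglob pattern that matches both the ng1nY character devices and the nvme1nY block devices under the controller's sysfs directory. A rough sketch of that discovery step, under the same assumptions as the sketch above (the function name scan_ctrl_namespaces is hypothetical; in functions.sh the loop is inline):

    shopt -s extglob nullglob    # @(...) patterns need extglob at parse time
    scan_ctrl_namespaces() {
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns="${ctrl##*/}_ns"    # nameref, e.g. nvme1_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}                  # ng1n1 and nvme1n1 both match
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev       # keyed by namespace number
        done
    }
    scan_ctrl_namespaces /sys/class/nvme/nvme1

This is why each namespace is parsed twice below, once via the ng1n1 char device and once via the nvme1n1 block device; both assignments use the same index, so the block-device name overwrites the char-device entry in nvme1_ns.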
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7
    mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
    nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:47.347 19:07:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.347 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:47.348 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
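Worth decoding while the lbaf rows scroll past: in each 'ms:X lbads:Y rp:Z' descriptor, lbads is the base-2 log of the LBA data size (lbads:9 means 512-byte sectors, lbads:12 means 4 KiB), ms is the metadata bytes carried per LBA, and the low four bits of flbas select the active format. nvme1n1 reported flbas=0x7 above, which is why its lbaf7 descriptor is the one tagged '(in use)'. A small sketch of the arithmetic, using the values from this trace:

  # Sketch: decode the in-use LBA format from the traced values.
  flbas=0x7    # nvme1n1[flbas] above
  lbads=12     # lbaf7 -> lbads:12
  ms=64        # lbaf7 -> ms:64
  fmt=$(( flbas & 0xf ))   # low 4 bits select the lbaf entry
  echo "lbaf${fmt}: $(( 1 << lbads ))-byte LBAs + ${ms}B metadata per block"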
00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:47.348 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:47.349 19:07:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:47.349 19:07:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:47.349 19:07:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.349 19:07:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.349 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
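One controller field in the id-ctrl dump above deserves a note: nvme2 reports mdts=7, and MDTS is expressed as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN), not in bytes. Assuming the common MPSMIN of 0 (4 KiB pages), which is what QEMU's emulated controller typically reports, that caps any single data transfer at 512 KiB. The arithmetic, under that assumption:

  # Sketch: assumes CAP.MPSMIN = 0, i.e. 4 KiB pages (the usual QEMU default).
  mdts=7
  mpsmin_bytes=4096
  echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512 KiB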
00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:47.350 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:47.350 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
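The wctemp=343 and cctemp=373 just parsed are the warning and critical composite temperature thresholds; identify-controller reports them in kelvins, so these correspond to 70 degC and 100 degC:

  # Sketch: identify-controller temperature thresholds are kelvins.
  for k in 343 373; do echo "${k} K = $(( k - 273 )) C"; done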
00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:47.351 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.351 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.352 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
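The oncs=0x15d captured a few lines up is a bitmask of optional NVM commands. Decoded against the bit assignments in the NVMe base specification (listed here from memory, so treat the mapping as an assumption to verify against the spec revision in use), 0x15d advertises Compare, Dataset Management, Write Zeroes, Save/Select in Set/Get Features, Timestamp, and Copy:

  # Sketch: bit map per NVMe base spec (from memory; verify before relying on it):
  # 0=Compare 1=WriteUncorrectable 2=DSM 3=WriteZeroes 4=Save/SelectFeatures
  # 5=Reservations 6=Timestamp 7=Verify 8=Copy
  oncs=0x15d
  names=(Compare WriteUncorrectable DSM WriteZeroes SaveSelect Reservations Timestamp Verify Copy)
  for i in "${!names[@]}"; do
    (( oncs & (1 << i) )) && echo "ONCS bit $i: ${names[$i]}"
  done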
00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.353 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 
19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.354 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.355 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:47.619 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:47.620 19:07:56 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 
19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.620 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:47.621 
19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:47.621 19:07:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:47.621 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.622 19:07:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:47.622 19:07:56 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.622 
19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:47.622 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:47.623 19:07:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.623 
19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.623 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
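
The frames above are nvme/functions.sh's nvme_get helper replaying the id-ns page for nvme2n1: @16 runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1, @17-20 declare the target as a global associative array, and the @21-23 loop splits every "reg : val" line of the output on IFS=: and evals it into that array, which is why each register lands as nvme2n1[dpc]=0x1f, nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)', and so on. A minimal sketch of that loop, reconstructed from these frames (simplified, not the verbatim functions.sh):

    # Sketch of the nvme_get parse loop traced at functions.sh@16-23.
    # Simplified reconstruction; whitespace handling in the real script
    # differs in detail.
    nvme_get() {
        local ref=$1 reg val            # @17: name of the array to fill
        shift                           # @18: remaining args go to nvme-cli
        local -gA "$ref=()"             # @20: declare global assoc array
        while IFS=: read -r reg val; do # @21: split "reg : val" on first ':'
            reg=${reg//[[:space:]]/}    # "nsze " -> "nsze"
            val=${val# }                # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue   # @22: skip blank fields
            eval "${ref}[\$reg]=\$val"  # @23: e.g. nvme2n1[dpc]=0x1f
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

    # Usage mirroring the trace: nvme_get nvme2n1 id-ns /dev/nvme2n1
    # afterwards: ${nvme2n1[nsze]} -> 0x100000, ${nvme2n1[nlbaf]} -> 7

Note that the lbaf rows keep their inner colons: read assigns only the first field to reg, so nvme2n1[lbaf0] ends up holding 'ms:0 lbads:9 rp:0 ' exactly as logged above.
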
00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:47.624 19:07:57 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.624 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:47.625 19:07:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.625 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:47.626 19:07:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:47.626 19:07:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:47.626 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.627 19:07:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:47.627 19:07:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:47.627 19:07:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:47.627 19:07:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.627 19:07:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
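
With the nvme2 namespace map closed out at functions.sh@58-63 (ctrls/nvmes/bdfs/ordered_ctrls), the @47 loop has moved on to /sys/class/nvme/nvme3: its BDF 0000:00:13.0 clears the pci_can_use filter (scripts/common.sh@18-27, with empty block and allow lists), and the id-ctrl page is now being parsed into the nvme3 array by the same nvme_get loop. A condensed sketch of that discovery walk, pieced together from the traced line numbers; the PCI_BLOCKED/PCI_ALLOWED names and the readlink-based BDF lookup are assumptions here, not verbatim functions.sh:

    shopt -s extglob                 # the @(...) namespace glob below needs it
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {                  # shape of scripts/common.sh@18-27
        local i                      # @18
        [[ $PCI_BLOCKED =~ $1 ]] && return 1   # @21: skip blocked BDFs
        [[ -z $PCI_ALLOWED ]] && return 0      # @25/@27: no allow-list -> usable
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

    for ctrl in /sys/class/nvme/nvme*; do                  # @47
        [[ -e $ctrl ]] || continue                         # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")    # @49 (lookup assumed)
        pci_can_use "$pci" || continue                     # @50
        ctrl_dev=${ctrl##*/}                               # @51: e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"      # @52
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
            [[ -e $ns ]] || continue                       # @55
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"    # @56-57
        done                     # @58/@61 also record a per-controller ns map
        ctrls["$ctrl_dev"]=$ctrl_dev                       # @60
        bdfs["$ctrl_dev"]=$pci                             # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # @63
    done

The ordered_ctrls index trick (${ctrl_dev/nvme/} strips the "nvme" prefix, leaving the controller number) is what keeps that array sorted by controller index regardless of glob order.
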
00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:47.627 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 
19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:47.628 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:47.629 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
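The xtrace above is nvme/functions.sh looping "while IFS=: read -r reg val" over "register : value" output for the nvme3 controller and eval-ing each pair into a bash associative array (eval is needed there only because the array name, nvme3, is chosen at runtime). A minimal sketch of the same parsing pattern, assuming the nvme CLI is installed; the "ctrl" array name and the /dev/nvme3 device are illustrative:

    #!/usr/bin/env bash
    # Read "name : value" pairs (the format `nvme id-ctrl` prints) into an
    # associative array, mirroring the IFS=: / read -r / eval loop in the trace.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # "ps    0 " becomes "ps0", matching the trace's keys
        val=${val# }               # drop the space printed after the colon
        [[ -n $reg ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3 2>/dev/null)
    echo "oacs=${ctrl[oacs]:-unset} oncs=${ctrl[oncs]:-unset}"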
00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:47.630 19:07:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:47.630 19:07:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:47.630 19:07:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.714 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.714 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.714 19:07:58 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:48.714 19:07:58 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:48.714 19:07:58 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.714 19:07:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:48.714 ************************************ 00:09:48.714 START TEST nvme_flexible_data_placement 00:09:48.714 ************************************ 00:09:48.714 19:07:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:48.972 Initializing NVMe Controllers 00:09:48.972 Attaching to 0000:00:13.0 00:09:48.972 Controller supports FDP Attached to 0000:00:13.0 00:09:48.972 Namespace ID: 1 Endurance Group ID: 1 00:09:48.972 Initialization complete. 
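The controller walk above is why the suite settles on nvme3: ctrl_has_fdp fetches each controller's CTRATT and tests bit 19, the Flexible Data Placement attribute, which is set in nvme3's 0x88010 but clear in the 0x8000 the other three report. The check itself, with the value hard-coded from the trace:

    # CTRATT bit 19 (0x80000) advertises Flexible Data Placement support.
    ctratt=0x88010                      # value echoed for nvme3 above
    if (( ctratt & 1 << 19 )); then
        echo "controller supports FDP"  # true for 0x88010, false for 0x8000
    fi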
00:09:48.972 
00:09:48.972 ==================================
00:09:48.972 == FDP tests for Namespace: #01 ==
00:09:48.972 ==================================
00:09:48.972 
00:09:48.972 Get Feature: FDP:
00:09:48.972 =================
00:09:48.972 Enabled: Yes
00:09:48.972 FDP configuration Index: 0
00:09:48.972 
00:09:48.972 FDP configurations log page
00:09:48.972 ===========================
00:09:48.972 Number of FDP configurations: 1
00:09:48.972 Version: 0
00:09:48.972 Size: 112
00:09:48.972 FDP Configuration Descriptor: 0
00:09:48.972 Descriptor Size: 96
00:09:48.972 Reclaim Group Identifier format: 2
00:09:48.972 FDP Volatile Write Cache: Not Present
00:09:48.972 FDP Configuration: Valid
00:09:48.972 Vendor Specific Size: 0
00:09:48.972 Number of Reclaim Groups: 2
00:09:48.972 Number of Reclaim Unit Handles: 8
00:09:48.972 Max Placement Identifiers: 128
00:09:48.972 Number of Namespaces Supported: 256
00:09:48.972 Reclaim Unit Nominal Size: 6000000 bytes
00:09:48.972 Estimated Reclaim Unit Time Limit: Not Reported
00:09:48.972 RUH Desc #000: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #001: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #002: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #003: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #004: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #005: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #006: RUH Type: Initially Isolated
00:09:48.972 RUH Desc #007: RUH Type: Initially Isolated
00:09:48.972 
00:09:48.972 FDP reclaim unit handle usage log page
00:09:48.972 ======================================
00:09:48.972 Number of Reclaim Unit Handles: 8
00:09:48.972 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:48.972 RUH Usage Desc #001: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #002: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #003: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #004: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #005: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #006: RUH Attributes: Unused
00:09:48.972 RUH Usage Desc #007: RUH Attributes: Unused
00:09:48.972 
00:09:48.972 FDP statistics log page
00:09:48.972 =======================
00:09:48.972 Host bytes with metadata written: 1075744768
00:09:48.972 Media bytes with metadata written: 1075859456
00:09:48.972 Media bytes erased: 0
00:09:48.972 
00:09:48.972 FDP Reclaim unit handle status
00:09:48.973 ==============================
00:09:48.973 Number of RUHS descriptors: 2
00:09:48.973 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001e17
00:09:48.973 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:48.973 
00:09:48.973 FDP write on placement id: 0 success
00:09:48.973 
00:09:48.973 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:48.973 
00:09:48.973 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:48.973 
00:09:48.973 Get Feature: FDP Events for Placement handle: #0
00:09:48.973 ========================
00:09:48.973 Number of FDP Events: 6
00:09:48.973 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:48.973 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:48.973 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:48.973 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:48.973 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:48.973 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:48.973 
00:09:48.973 FDP events log
page 00:09:48.973 =================== 00:09:48.973 Number of FDP events: 1 00:09:48.973 FDP Event #0: 00:09:48.973 Event Type: RU Not Written to Capacity 00:09:48.973 Placement Identifier: Valid 00:09:48.973 NSID: Valid 00:09:48.973 Location: Valid 00:09:48.973 Placement Identifier: 0 00:09:48.973 Event Timestamp: 6 00:09:48.973 Namespace Identifier: 1 00:09:48.973 Reclaim Group Identifier: 0 00:09:48.973 Reclaim Unit Handle Identifier: 0 00:09:48.973 00:09:48.973 FDP test passed 00:09:48.973 00:09:48.973 real 0m0.234s 00:09:48.973 user 0m0.068s 00:09:48.973 sys 0m0.065s 00:09:48.973 ************************************ 00:09:48.973 END TEST nvme_flexible_data_placement 00:09:48.973 ************************************ 00:09:48.973 19:07:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.973 19:07:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 ************************************ 00:09:48.973 END TEST nvme_fdp 00:09:48.973 ************************************ 00:09:48.973 00:09:48.973 real 0m7.649s 00:09:48.973 user 0m1.085s 00:09:48.973 sys 0m1.345s 00:09:48.973 19:07:58 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.973 19:07:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 19:07:58 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:48.973 19:07:58 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:48.973 19:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.973 19:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.973 19:07:58 -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 ************************************ 00:09:48.973 START TEST nvme_rpc 00:09:48.973 ************************************ 00:09:48.973 19:07:58 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:48.973 * Looking for test storage... 
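The statistics and reclaim-unit-handle-status pages above are the fdp tool's evidence that writes really flowed through the placement handles. A possible follow-up check on a captured run; the fdp.log filename and the media-versus-host comparison are assumptions for illustration, not part of the test:

    # Pull the headline byte counters back out of a saved FDP statistics dump.
    host=$(awk -F': ' '/Host bytes with metadata written/ {print $2}' fdp.log)
    media=$(awk -F': ' '/Media bytes with metadata written/ {print $2}' fdp.log)
    # Media-side writes normally meet or exceed host-side writes.
    (( ${host:-0} > 0 && ${media:-0} >= ${host:-0} )) ||
        { echo "no FDP data written" >&2; exit 1; }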
00:09:48.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.973 19:07:58 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.973 19:07:58 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.973 19:07:58 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.232 19:07:58 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.232 --rc genhtml_branch_coverage=1 00:09:49.232 --rc genhtml_function_coverage=1 00:09:49.232 --rc genhtml_legend=1 00:09:49.232 --rc geninfo_all_blocks=1 00:09:49.232 --rc geninfo_unexecuted_blocks=1 00:09:49.232 00:09:49.232 ' 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.232 --rc genhtml_branch_coverage=1 00:09:49.232 --rc genhtml_function_coverage=1 00:09:49.232 --rc genhtml_legend=1 00:09:49.232 --rc geninfo_all_blocks=1 00:09:49.232 --rc geninfo_unexecuted_blocks=1 00:09:49.232 00:09:49.232 ' 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.232 --rc genhtml_branch_coverage=1 00:09:49.232 --rc genhtml_function_coverage=1 00:09:49.232 --rc genhtml_legend=1 00:09:49.232 --rc geninfo_all_blocks=1 00:09:49.232 --rc geninfo_unexecuted_blocks=1 00:09:49.232 00:09:49.232 ' 00:09:49.232 19:07:58 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.232 --rc genhtml_branch_coverage=1 00:09:49.232 --rc genhtml_function_coverage=1 00:09:49.232 --rc genhtml_legend=1 00:09:49.232 --rc geninfo_all_blocks=1 00:09:49.232 --rc geninfo_unexecuted_blocks=1 00:09:49.232 00:09:49.232 ' 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:49.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65799 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65799 00:09:49.233 19:07:58 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65799 ']' 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.233 19:07:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.233 [2024-11-27 19:07:58.804346] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
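The preamble repeated before every TEST block (lcov --version piped through awk, then the lt 1.15 2 / decimal / ver1[v] / ver2[v] steps) is scripts/common.sh comparing the installed lcov release against 2, field by field, before choosing coverage flags. A condensed stand-in for that lt/cmp_versions helper; the real one also handles other operators and version suffixes:

    # Succeed when dotted version $1 sorts strictly below $2.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: enable the old coverage flags"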
00:09:49.233 [2024-11-27 19:07:58.804595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65799 ] 00:09:49.491 [2024-11-27 19:07:58.962093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.491 [2024-11-27 19:07:59.079745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.492 [2024-11-27 19:07:59.079805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.435 19:07:59 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.435 19:07:59 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.435 19:07:59 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:50.435 Nvme0n1 00:09:50.435 19:07:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:50.435 19:07:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:50.693 request: 00:09:50.693 { 00:09:50.693 "bdev_name": "Nvme0n1", 00:09:50.693 "filename": "non_existing_file", 00:09:50.693 "method": "bdev_nvme_apply_firmware", 00:09:50.693 "req_id": 1 00:09:50.693 } 00:09:50.693 Got JSON-RPC error response 00:09:50.693 response: 00:09:50.693 { 00:09:50.694 "code": -32603, 00:09:50.694 "message": "open file failed." 00:09:50.694 } 00:09:50.694 19:08:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:50.694 19:08:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:50.694 19:08:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:50.952 19:08:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:50.952 19:08:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65799 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65799 ']' 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65799 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65799 00:09:50.952 killing process with pid 65799 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65799' 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65799 00:09:50.952 19:08:00 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65799 00:09:52.329 00:09:52.329 real 0m3.274s 00:09:52.329 user 0m6.150s 00:09:52.329 sys 0m0.544s 00:09:52.329 ************************************ 00:09:52.329 END TEST nvme_rpc 00:09:52.329 ************************************ 00:09:52.330 19:08:01 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.330 19:08:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.330 19:08:01 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.330 19:08:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:52.330 19:08:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.330 19:08:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.330 ************************************ 00:09:52.330 START TEST nvme_rpc_timeouts 00:09:52.330 ************************************ 00:09:52.330 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.330 * Looking for test storage... 00:09:52.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.330 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.330 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.330 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:52.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
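The nvme_rpc suite that just finished is a compact JSON-RPC round trip against a live spdk_tgt: attach the first controller as bdev Nvme0, hand bdev_nvme_apply_firmware a file that cannot exist, require the -32603 "open file failed." response shown above, then detach. The same sequence reduced to its rpc.py calls, all three of which appear verbatim in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # rpc.py exits non-zero on a JSON-RPC error, so success here is the bug.
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "expected apply_firmware to fail on a missing file" >&2
        exit 1
    fi
    $rpc bdev_nvme_detach_controller Nvme0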
00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.589 19:08:01 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.589 --rc genhtml_branch_coverage=1 00:09:52.589 --rc genhtml_function_coverage=1 00:09:52.589 --rc genhtml_legend=1 00:09:52.589 --rc geninfo_all_blocks=1 00:09:52.589 --rc geninfo_unexecuted_blocks=1 00:09:52.589 00:09:52.589 ' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.589 --rc genhtml_branch_coverage=1 00:09:52.589 --rc genhtml_function_coverage=1 00:09:52.589 --rc genhtml_legend=1 00:09:52.589 --rc geninfo_all_blocks=1 00:09:52.589 --rc geninfo_unexecuted_blocks=1 00:09:52.589 00:09:52.589 ' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.589 --rc genhtml_branch_coverage=1 00:09:52.589 --rc genhtml_function_coverage=1 00:09:52.589 --rc genhtml_legend=1 00:09:52.589 --rc geninfo_all_blocks=1 00:09:52.589 --rc geninfo_unexecuted_blocks=1 00:09:52.589 00:09:52.589 ' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.589 --rc genhtml_branch_coverage=1 00:09:52.589 --rc genhtml_function_coverage=1 00:09:52.589 --rc genhtml_legend=1 00:09:52.589 --rc geninfo_all_blocks=1 00:09:52.589 --rc geninfo_unexecuted_blocks=1 00:09:52.589 00:09:52.589 ' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65859 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65859 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65896 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65896 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65896 ']' 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.589 19:08:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
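Both RPC suites share this bring-up: start spdk_tgt on a two-core mask, arm a kill-on-exit trap around its pid, then block until the /var/tmp/spdk.sock socket answers (the "Waiting for process to start up..." records). A reduced sketch; the polling loop stands in for autotest_common.sh's waitforlisten helper:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 0x3 &
    tgt_pid=$!
    trap 'kill -9 $tgt_pid; exit 1' SIGINT SIGTERM EXIT
    # Poll the UNIX-domain RPC socket until the target starts answering.
    for (( i = 0; i < 100; i++ )); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done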
00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.589 19:08:01 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:52.589 [2024-11-27 19:08:02.067404] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:52.589 [2024-11-27 19:08:02.067675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65896 ] 00:09:52.589 [2024-11-27 19:08:02.221101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.848 [2024-11-27 19:08:02.341214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.848 [2024-11-27 19:08:02.341224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.415 19:08:02 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.415 19:08:02 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:53.415 19:08:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:53.415 Checking default timeout settings: 00:09:53.415 19:08:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:53.983 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:53.983 Making settings changes with rpc: 00:09:53.983 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:53.983 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:53.983 Check default vs. modified settings: 00:09:53.983 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.242 Setting action_on_timeout is changed as expected. 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 Setting timeout_us is changed as expected. 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65859 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.242 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.501 Setting timeout_admin_us is changed as expected. 00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
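The three verification passes above are plain text processing: dump save_config before and after bdev_nvme_set_options, grep each setting out of both dumps, strip it to a bare value with awk and sed, and demand that the value changed. The whole loop, condensed; the /tmp file names follow the trace's settings_default/settings_modified convention:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_$$
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_$$
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "Setting $setting did not change" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done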
00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65859 /tmp/settings_modified_65859 00:09:54.501 19:08:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65896 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65896 ']' 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65896 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65896 00:09:54.501 killing process with pid 65896 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65896' 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65896 00:09:54.501 19:08:03 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65896 00:09:55.876 RPC TIMEOUT SETTING TEST PASSED. 00:09:55.876 19:08:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:55.876 ************************************ 00:09:55.876 END TEST nvme_rpc_timeouts 00:09:55.876 ************************************ 00:09:55.876 00:09:55.876 real 0m3.457s 00:09:55.876 user 0m6.670s 00:09:55.876 sys 0m0.544s 00:09:55.877 19:08:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.877 19:08:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:55.877 19:08:05 -- spdk/autotest.sh@239 -- # uname -s 00:09:55.877 19:08:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:55.877 19:08:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:55.877 19:08:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.877 19:08:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.877 19:08:05 -- common/autotest_common.sh@10 -- # set +x 00:09:55.877 ************************************ 00:09:55.877 START TEST sw_hotplug 00:09:55.877 ************************************ 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:55.877 * Looking for test storage... 
00:09:55.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.877 19:08:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.877 --rc genhtml_branch_coverage=1 00:09:55.877 --rc genhtml_function_coverage=1 00:09:55.877 --rc genhtml_legend=1 00:09:55.877 --rc geninfo_all_blocks=1 00:09:55.877 --rc geninfo_unexecuted_blocks=1 00:09:55.877 00:09:55.877 ' 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.877 --rc genhtml_branch_coverage=1 00:09:55.877 --rc genhtml_function_coverage=1 00:09:55.877 --rc genhtml_legend=1 00:09:55.877 --rc geninfo_all_blocks=1 00:09:55.877 --rc geninfo_unexecuted_blocks=1 00:09:55.877 00:09:55.877 ' 00:09:55.877 19:08:05 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.877 --rc genhtml_branch_coverage=1 00:09:55.877 --rc genhtml_function_coverage=1 00:09:55.877 --rc genhtml_legend=1 00:09:55.877 --rc geninfo_all_blocks=1 00:09:55.877 --rc geninfo_unexecuted_blocks=1 00:09:55.877 00:09:55.877 ' 00:09:55.877 19:08:05 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.877 --rc genhtml_branch_coverage=1 00:09:55.877 --rc genhtml_function_coverage=1 00:09:55.877 --rc genhtml_legend=1 00:09:55.877 --rc geninfo_all_blocks=1 00:09:55.877 --rc geninfo_unexecuted_blocks=1 00:09:55.877 00:09:55.877 ' 00:09:55.877 19:08:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.444 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:56.444 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:56.444 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:56.444 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:56.444 19:08:05 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:56.444 19:08:05 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:56.444 19:08:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:56.444 19:08:05 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:56.444 19:08:05 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:56.444 
19:08:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:56.444 19:08:06 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:56.444 19:08:06 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:56.444 19:08:06 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:56.444 19:08:06 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:56.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.962 Waiting for block devices as requested 00:09:56.962 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.962 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.220 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.220 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.527 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:02.527 19:08:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:02.527 19:08:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:02.785 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:02.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:02.786 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:03.044 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:03.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.300 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66752 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:03.300 19:08:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:03.300 19:08:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:03.557 Initializing NVMe Controllers 00:10:03.557 Attaching to 0000:00:10.0 00:10:03.557 Attaching to 0000:00:11.0 00:10:03.557 Attached to 0000:00:10.0 00:10:03.557 Attached to 0000:00:11.0 00:10:03.557 Initialization complete. Starting I/O... 00:10:03.557 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:03.557 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:03.557 00:10:04.490 QEMU NVMe Ctrl (12340 ): 2538 I/Os completed (+2538) 00:10:04.491 QEMU NVMe Ctrl (12341 ): 2534 I/Os completed (+2534) 00:10:04.491 00:10:05.424 QEMU NVMe Ctrl (12340 ): 5663 I/Os completed (+3125) 00:10:05.424 QEMU NVMe Ctrl (12341 ): 5634 I/Os completed (+3100) 00:10:05.424 00:10:06.798 QEMU NVMe Ctrl (12340 ): 8768 I/Os completed (+3105) 00:10:06.798 QEMU NVMe Ctrl (12341 ): 8822 I/Os completed (+3188) 00:10:06.798 00:10:07.731 QEMU NVMe Ctrl (12340 ): 12464 I/Os completed (+3696) 00:10:07.731 QEMU NVMe Ctrl (12341 ): 12515 I/Os completed (+3693) 00:10:07.731 00:10:08.678 QEMU NVMe Ctrl (12340 ): 16271 I/Os completed (+3807) 00:10:08.678 QEMU NVMe Ctrl (12341 ): 16326 I/Os completed (+3811) 00:10:08.678 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.245 [2024-11-27 19:08:18.845791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:09.245 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.245 [2024-11-27 19:08:18.846859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.846906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.846922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.846938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.245 [2024-11-27 19:08:18.848564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.848604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.848616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.848628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:09.245 EAL: Scan for (pci) bus failed. 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.245 [2024-11-27 19:08:18.864057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
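The remove/attach cycle that starts here is driven by remove_attach_helper, whose shape can be reconstructed from the @27 through @40 trace lines (a sketch only; the real helper also re-verifies the devices after reattach, and in bdev mode consults the target rather than sysfs):

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
    while (( hotplug_events-- )); do
        # Surprise-remove every device under test (nvmes was filled earlier
        # from the PCI scan; here it holds 0000:00:10.0 and 0000:00:11.0).
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        # Give the application time to notice, then bring the devices back.
        sleep "$hotplug_wait"
        echo 1 > /sys/bus/pci/rescan
        sleep $((hotplug_wait * 2))
    done
}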
00:10:09.245 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.245 [2024-11-27 19:08:18.864965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.865000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.865021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.865035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.245 [2024-11-27 19:08:18.866499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.866529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.866543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 [2024-11-27 19:08:18.866557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.245 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.504 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.504 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.504 19:08:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.504 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.504 Attaching to 0000:00:10.0 00:10:09.504 Attached to 0000:00:10.0 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.504 19:08:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.504 Attaching to 0000:00:11.0 00:10:09.504 Attached to 0000:00:11.0 00:10:10.438 QEMU NVMe Ctrl (12340 ): 3704 I/Os completed (+3704) 00:10:10.438 QEMU NVMe Ctrl (12341 ): 3412 I/Os completed (+3412) 00:10:10.438 00:10:11.813 QEMU NVMe Ctrl (12340 ): 7540 I/Os completed (+3836) 00:10:11.813 QEMU NVMe Ctrl (12341 ): 7248 I/Os completed (+3836) 00:10:11.813 00:10:12.748 QEMU NVMe Ctrl (12340 ): 11188 I/Os completed (+3648) 00:10:12.748 QEMU NVMe Ctrl (12341 ): 10895 I/Os completed (+3647) 00:10:12.748 00:10:13.682 QEMU NVMe Ctrl (12340 ): 14280 I/Os completed (+3092) 00:10:13.682 QEMU NVMe Ctrl (12341 ): 13987 I/Os completed (+3092) 00:10:13.682 00:10:14.616 QEMU NVMe Ctrl (12340 ): 17767 I/Os completed (+3487) 00:10:14.616 QEMU NVMe Ctrl (12341 ): 17410 I/Os completed (+3423) 00:10:14.616 00:10:15.549 QEMU NVMe Ctrl (12340 ): 21451 I/Os completed (+3684) 00:10:15.549 QEMU NVMe Ctrl (12341 ): 21052 I/Os completed (+3642) 00:10:15.549 00:10:16.482 QEMU NVMe Ctrl (12340 ): 25140 I/Os completed (+3689) 00:10:16.482 QEMU NVMe Ctrl (12341 ): 24749 I/Os completed (+3697) 00:10:16.482 00:10:17.416 QEMU NVMe Ctrl (12340 ): 28816 I/Os completed (+3676) 00:10:17.416 QEMU NVMe Ctrl (12341 ): 28407 I/Os completed (+3658) 
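Per device, the reattach half of each cycle (the @56 through @62 echoes above) amounts to sysfs writes of this shape (a sketch; the exact file names are inferred from the values being echoed, with uio_pci_generic as the driver this rig uses):

# Rescan the PCI bus so the removed device reappears.
echo 1 > /sys/bus/pci/rescan
# Steer the device to the wanted driver, probe it, then clear the override.
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"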
00:10:17.416 00:10:18.788 QEMU NVMe Ctrl (12340 ): 32567 I/Os completed (+3751) 00:10:18.788 QEMU NVMe Ctrl (12341 ): 32095 I/Os completed (+3688) 00:10:18.788 00:10:19.723 QEMU NVMe Ctrl (12340 ): 36212 I/Os completed (+3645) 00:10:19.723 QEMU NVMe Ctrl (12341 ): 35666 I/Os completed (+3571) 00:10:19.723 00:10:20.657 QEMU NVMe Ctrl (12340 ): 39947 I/Os completed (+3735) 00:10:20.657 QEMU NVMe Ctrl (12341 ): 39265 I/Os completed (+3599) 00:10:20.657 00:10:21.601 QEMU NVMe Ctrl (12340 ): 43158 I/Os completed (+3211) 00:10:21.601 QEMU NVMe Ctrl (12341 ): 42547 I/Os completed (+3282) 00:10:21.601 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.601 [2024-11-27 19:08:31.134788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:21.601 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:21.601 [2024-11-27 19:08:31.138470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.138598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.138652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.138702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:21.601 [2024-11-27 19:08:31.143212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.143268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.143284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.143299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.601 [2024-11-27 19:08:31.156542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:21.601 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:21.601 [2024-11-27 19:08:31.157611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.157651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.157674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.157691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:21.601 [2024-11-27 19:08:31.159408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.159445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.159461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 [2024-11-27 19:08:31.159476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.601 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:21.601 EAL: Scan for (pci) bus failed. 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:21.601 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:21.861 Attaching to 0000:00:10.0 00:10:21.861 Attached to 0000:00:10.0 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.861 19:08:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:21.861 Attaching to 0000:00:11.0 00:10:21.861 Attached to 0000:00:11.0 00:10:22.433 QEMU NVMe Ctrl (12340 ): 2487 I/Os completed (+2487) 00:10:22.433 QEMU NVMe Ctrl (12341 ): 2240 I/Os completed (+2240) 00:10:22.433 00:10:23.817 QEMU NVMe Ctrl (12340 ): 5968 I/Os completed (+3481) 00:10:23.817 QEMU NVMe Ctrl (12341 ): 5890 I/Os completed (+3650) 00:10:23.817 00:10:24.753 QEMU NVMe Ctrl (12340 ): 9307 I/Os completed (+3339) 00:10:24.753 QEMU NVMe Ctrl (12341 ): 9301 I/Os completed (+3411) 00:10:24.753 00:10:25.685 QEMU NVMe Ctrl (12340 ): 12783 I/Os completed (+3476) 00:10:25.685 QEMU NVMe Ctrl (12341 ): 12970 I/Os completed (+3669) 00:10:25.685 00:10:26.618 QEMU NVMe Ctrl (12340 ): 16258 I/Os completed (+3475) 00:10:26.618 QEMU NVMe Ctrl (12341 ): 16516 I/Os completed (+3546) 00:10:26.618 00:10:27.552 QEMU NVMe Ctrl (12340 ): 19796 I/Os completed (+3538) 00:10:27.552 QEMU NVMe Ctrl (12341 ): 19995 I/Os completed (+3479) 00:10:27.552 00:10:28.487 QEMU NVMe Ctrl (12340 ): 23318 I/Os completed (+3522) 00:10:28.487 QEMU NVMe Ctrl (12341 ): 23506 I/Os completed (+3511) 00:10:28.487 
00:10:29.457 QEMU NVMe Ctrl (12340 ): 26526 I/Os completed (+3208) 00:10:29.457 QEMU NVMe Ctrl (12341 ): 26629 I/Os completed (+3123) 00:10:29.457 00:10:30.828 QEMU NVMe Ctrl (12340 ): 30157 I/Os completed (+3631) 00:10:30.828 QEMU NVMe Ctrl (12341 ): 30294 I/Os completed (+3665) 00:10:30.828 00:10:31.763 QEMU NVMe Ctrl (12340 ): 33673 I/Os completed (+3516) 00:10:31.763 QEMU NVMe Ctrl (12341 ): 33839 I/Os completed (+3545) 00:10:31.763 00:10:32.704 QEMU NVMe Ctrl (12340 ): 36987 I/Os completed (+3314) 00:10:32.704 QEMU NVMe Ctrl (12341 ): 37158 I/Os completed (+3319) 00:10:32.704 00:10:33.638 QEMU NVMe Ctrl (12340 ): 40568 I/Os completed (+3581) 00:10:33.638 QEMU NVMe Ctrl (12341 ): 40771 I/Os completed (+3613) 00:10:33.638 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.897 [2024-11-27 19:08:43.403195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:33.897 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:33.897 [2024-11-27 19:08:43.404189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.404232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.404248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.404265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:33.897 [2024-11-27 19:08:43.405947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.405991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.406005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.406017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:10:33.897 EAL: Scan for (pci) bus failed. 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.897 [2024-11-27 19:08:43.423903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:33.897 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:33.897 [2024-11-27 19:08:43.424791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.424827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.424844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.424858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:33.897 [2024-11-27 19:08:43.426272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.426305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.426320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 [2024-11-27 19:08:43.426331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.897 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:33.897 EAL: Scan for (pci) bus failed. 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.897 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:34.157 Attaching to 0000:00:10.0 00:10:34.157 Attached to 0000:00:10.0 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.157 19:08:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.157 Attaching to 0000:00:11.0 00:10:34.157 Attached to 0000:00:11.0 00:10:34.157 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:34.157 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:34.157 [2024-11-27 19:08:43.668350] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:46.384 19:08:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:46.384 19:08:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.384 19:08:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.82 00:10:46.384 19:08:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.82 00:10:46.384 19:08:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:46.384 19:08:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:10:46.384 19:08:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:10:46.384 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 19:08:55 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66752
00:10:52.952 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66752) - No such process
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66752
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67301
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67301
00:10:52.952 19:09:01 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67301 ']'
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:52.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:52.952 19:09:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:52.952 [2024-11-27 19:09:01.750250] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
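tgt_run_hotplug repeats the experiment with the devices owned by an SPDK target process rather than the standalone example. The launch-and-wait idiom traced above is, in outline (a sketch; waitforlisten's actual readiness probe goes through the RPC socket rather than the bare -S test used here):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
# On any interruption, kill the target and rescan the bus so the machine
# is not left with its NVMe devices detached.
trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
rpc_addr=/var/tmp/spdk.sock max_retries=100
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
while (( max_retries-- )); do
    kill -0 "$spdk_tgt_pid" || exit 1   # -0 probes liveness, delivers no signal
    [ -S "$rpc_addr" ] && break         # socket present: the target is listening
    sleep 0.1
done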
00:10:52.952 [2024-11-27 19:09:01.750371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67301 ] 00:10:52.952 [2024-11-27 19:09:01.906433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.952 [2024-11-27 19:09:02.012583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:53.210 19:09:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:53.210 19:09:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.773 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.774 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.774 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.774 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.774 19:09:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.774 19:09:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.774 19:09:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.774 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:59.774 19:09:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:59.774 [2024-11-27 19:09:08.756021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:59.774 [2024-11-27 19:09:08.757343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:08.757381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:08.757396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:08.757416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:08.757424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:08.757432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:08.757439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:08.757448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:08.757454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:08.757465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:08.757472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:08.757480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:09.156028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
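In bdev mode the script decides whether a device is gone by asking the target, not sysfs. The bdev_bdfs helper and the wait loop traced at @50/@51 condense to (a sketch; rpc_cmd is the harness wrapper around rpc.py, and the trace feeds jq through process substitution rather than a plain pipe):

bdev_bdfs() {
    # List the PCI address behind every NVMe-backed bdev the target knows.
    rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}
# After the surprise removal, poll until no bdev references the BDFs anymore.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done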
00:10:59.774 [2024-11-27 19:09:09.157301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:09.157333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:09.157345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:09.157362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:09.157371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:09.157377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:09.157387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:09.157393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:09.157401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 [2024-11-27 19:09:09.157409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.774 [2024-11-27 19:09:09.157417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.774 [2024-11-27 19:09:09.157424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.774 19:09:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.774 19:09:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.774 19:09:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.774 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.032 19:09:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.237 19:09:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:12.237 19:09:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.237 [2024-11-27 19:09:21.656214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:12.237 [2024-11-27 19:09:21.657459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.237 [2024-11-27 19:09:21.657491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.237 [2024-11-27 19:09:21.657503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.237 [2024-11-27 19:09:21.657524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.237 [2024-11-27 19:09:21.657532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.237 [2024-11-27 19:09:21.657541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.237 [2024-11-27 19:09:21.657548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.237 [2024-11-27 19:09:21.657556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.237 [2024-11-27 19:09:21.657562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.237 [2024-11-27 19:09:21.657571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.237 [2024-11-27 19:09:21.657578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.237 [2024-11-27 19:09:21.657586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.855 19:09:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.855 19:09:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.855 [2024-11-27 19:09:22.156206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
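The reattach side of each bdev-mode cycle ends with the @70/@71 check that brackets these hotplug events: after rescan and rebind, the target must report exactly the expected controllers again (condensed; the expected list is the two allowed BDFs on this rig):

bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || exit 1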
00:11:12.855 [2024-11-27 19:09:22.157485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.855 [2024-11-27 19:09:22.157512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.855 [2024-11-27 19:09:22.157525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.855 [2024-11-27 19:09:22.157538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.855 [2024-11-27 19:09:22.157547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.855 [2024-11-27 19:09:22.157554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.855 [2024-11-27 19:09:22.157563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.855 [2024-11-27 19:09:22.157570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.855 [2024-11-27 19:09:22.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.855 [2024-11-27 19:09:22.157585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.855 [2024-11-27 19:09:22.157593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.855 [2024-11-27 19:09:22.157600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.855 19:09:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.855 19:09:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:11:25.070 19:09:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.070 19:09:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.070 19:09:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.070 19:09:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.070 19:09:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.070 [2024-11-27 19:09:34.556425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:25.070 [2024-11-27 19:09:34.557700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.070 [2024-11-27 19:09:34.557732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.070 [2024-11-27 19:09:34.557744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.070 [2024-11-27 19:09:34.557765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.070 [2024-11-27 19:09:34.557773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.070 [2024-11-27 19:09:34.557783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.070 [2024-11-27 19:09:34.557791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.070 [2024-11-27 19:09:34.557800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.070 [2024-11-27 19:09:34.557806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.070 [2024-11-27 19:09:34.557815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.070 [2024-11-27 19:09:34.557821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.070 [2024-11-27 19:09:34.557830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.070 
19:09:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:25.070 19:09:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:25.329 [2024-11-27 19:09:34.956422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:25.329 [2024-11-27 19:09:34.957594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.329 [2024-11-27 19:09:34.957619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.329 [2024-11-27 19:09:34.957631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.329 [2024-11-27 19:09:34.957644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.329 [2024-11-27 19:09:34.957653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.329 [2024-11-27 19:09:34.957660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.329 [2024-11-27 19:09:34.957670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.329 [2024-11-27 19:09:34.957676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.329 [2024-11-27 19:09:34.957686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.329 [2024-11-27 19:09:34.957694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.329 [2024-11-27 19:09:34.957701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.329 [2024-11-27 19:09:34.957708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.587 19:09:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.587 19:09:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.587 19:09:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:25.587 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 
0000:00:10.0 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.845 19:09:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.77 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.77 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.77 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.77 2 00:11:38.044 remove_attach_helper took 44.77s to complete (handling 2 nvme drive(s)) 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:38.044 19:09:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:38.044 19:09:47 sw_hotplug -- 
nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:38.044 19:09:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.607 19:09:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.607 19:09:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.607 19:09:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:44.607 19:09:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.607 [2024-11-27 19:09:53.556495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
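The 44.77-second figure printed above comes from the timing_cmd wrapper traced at autotest_common.sh@709-722: it runs remove_attach_helper under bash's builtin time with TIMEFORMAT=%2R so the report is a bare seconds value, then echoes that value for debug_remove_attach_helper to store in helper_time. A hedged sketch; the variable names follow the trace, but the exact fd plumbing behind the traced exec line is an assumption:

timing_cmd() {
    local cmd_es=0
    [[ -t 0 ]] && exec < /dev/null  # @711: detach stdin on a terminal (assumed target)
    local time=0 TIMEFORMAT=%2R     # bare wall-clock seconds, two decimals

    exec 3>&1 4>&2
    # Let the timed command keep the real stdout/stderr via fds 3/4 while the
    # block's stderr, where `time` writes its one-line report, is captured.
    time=$({ time "$@" 1>&3 2>&4; } 2>&1) || cmd_es=$?
    exec 3>&- 4>&-

    echo "$time"
    return "$cmd_es"
}

debug_remove_attach_helper then prints the 'remove_attach_helper took 44.77s to complete (handling 2 nvme drive(s))' summary that appears twice in this log.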
00:11:44.607 [2024-11-27 19:09:53.557571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.557671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.557727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.557767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.557786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.557813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.557837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.557855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.557916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.558094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.558113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.558157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.956484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:44.607 [2024-11-27 19:09:53.957718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.957815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.957876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.957907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.957941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.957967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.957993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.958011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.958071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 [2024-11-27 19:09:53.958133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.607 [2024-11-27 19:09:53.958306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.607 [2024-11-27 19:09:53.958338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.607 19:09:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.607 19:09:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.607 19:09:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.607 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.866 19:09:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.161 [2024-11-27 19:10:06.457088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:57.161 [2024-11-27 19:10:06.458090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.161 [2024-11-27 19:10:06.458138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.161 [2024-11-27 19:10:06.458150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.161 [2024-11-27 19:10:06.458172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.161 [2024-11-27 19:10:06.458180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.161 [2024-11-27 19:10:06.458189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.161 [2024-11-27 19:10:06.458196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.161 [2024-11-27 19:10:06.458205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.161 [2024-11-27 19:10:06.458211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.161 [2024-11-27 19:10:06.458220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.161 [2024-11-27 19:10:06.458228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.161 [2024-11-27 19:10:06.458236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.161 19:10:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:57.161 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.420 [2024-11-27 19:10:06.857085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
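Each 'Still waiting for %s to be gone' line in this stretch is one iteration of the detach-side poll at sw_hotplug.sh@50-51: re-run bdev_bdfs every half second until the RPC no longer reports any of the surprise-removed controllers. The loop shape is an assumption (the trace only shows the condition, the printf, and the sleep), reusing the bdev_bdfs sketch from earlier:

# Poll until every removed BDF has vanished from the target's bdev list.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done

The aborted ASYNC EVENT REQUEST completions interleaved with the poll are the expected side effect: failing a controller aborts its outstanding admin commands (nvme_pcie_qpair_abort_trackers), which the driver reports as ABORTED - BY REQUEST.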
00:11:57.420 [2024-11-27 19:10:06.858035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.420 [2024-11-27 19:10:06.858064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.420 [2024-11-27 19:10:06.858075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.420 [2024-11-27 19:10:06.858086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.420 [2024-11-27 19:10:06.858097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.420 [2024-11-27 19:10:06.858104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.420 [2024-11-27 19:10:06.858113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.420 [2024-11-27 19:10:06.858120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.420 [2024-11-27 19:10:06.858139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.420 [2024-11-27 19:10:06.858147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.420 [2024-11-27 19:10:06.858155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.420 [2024-11-27 19:10:06.858162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.420 19:10:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.420 19:10:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.420 19:10:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.420 19:10:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.420 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.420 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:57.678 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.679 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.679 19:10:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.876 19:10:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:09.876 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:09.876 [2024-11-27 19:10:19.357307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
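After the wait, the script re-plugs both controllers before the 12-second settle at sw_hotplug.sh@66. set -x does not print redirection targets, so only the echoed values above (1, uio_pci_generic, the BDF twice, then an empty string) are confirmed by the log; the sysfs paths in this sketch are assumptions mapping that sequence onto the standard Linux PCI hotplug interface:

# Hypothetical targets for the traced echoes; only the written values are
# visible in the xtrace.
rescan_and_rebind() {
    echo 1 > /sys/bus/pci/rescan                    # @56: rediscover removed devices
    local bdf
    for bdf in "$@"; do
        # @59: pin the userspace driver for this device
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        # @60-61: two BDF writes, plausibly an unbind from the current driver
        # followed by a request to probe it again per the override
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        # @62: clear the override so later probes behave normally
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done
}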
00:12:09.876 [2024-11-27 19:10:19.358577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.876 [2024-11-27 19:10:19.358677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.876 [2024-11-27 19:10:19.358733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.876 [2024-11-27 19:10:19.358771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.876 [2024-11-27 19:10:19.358789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.876 [2024-11-27 19:10:19.358816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.876 [2024-11-27 19:10:19.358840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.876 [2024-11-27 19:10:19.358861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.877 [2024-11-27 19:10:19.358926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.877 [2024-11-27 19:10:19.358955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.877 [2024-11-27 19:10:19.358971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.877 [2024-11-27 19:10:19.358996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.442 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:10.442 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.442 [2024-11-27 19:10:19.857306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:10.442 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.442 [2024-11-27 19:10:19.858590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.442 [2024-11-27 19:10:19.858686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.442 [2024-11-27 19:10:19.858746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.442 [2024-11-27 19:10:19.858775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.442 [2024-11-27 19:10:19.858794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.442 [2024-11-27 19:10:19.858818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.442 [2024-11-27 19:10:19.858843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.442 [2024-11-27 19:10:19.858860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.443 [2024-11-27 19:10:19.858912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.443 [2024-11-27 19:10:19.859102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.443 [2024-11-27 19:10:19.859143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.443 [2024-11-27 19:10:19.859171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.443 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.443 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.443 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.443 19:10:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.443 19:10:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.443 19:10:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.443 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:10.443 19:10:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:10.443 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.443 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.443 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.701 19:10:20 sw_hotplug -- nvme/sw_hotplug.sh@66 
-- # sleep 12 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.77 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.77 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.77 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.77 2 00:12:22.916 remove_attach_helper took 44.77s to complete (handling 2 nvme drive(s)) 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:22.916 19:10:32 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67301 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67301 ']' 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67301 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67301 00:12:22.916 killing process with pid 67301 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67301' 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67301 00:12:22.916 19:10:32 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67301 00:12:24.291 19:10:33 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:24.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.859 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:24.859 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:24.859 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:24.859 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.119 ************************************ 00:12:25.119 END TEST sw_hotplug 00:12:25.119 ************************************ 00:12:25.119 00:12:25.119 real 2m29.166s 00:12:25.119 user 1m51.412s 00:12:25.119 sys 0m16.306s 00:12:25.119 19:10:34 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.119 19:10:34 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.119 19:10:34 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:25.119 19:10:34 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:25.119 19:10:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.119 19:10:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.119 19:10:34 -- common/autotest_common.sh@10 -- # set +x 00:12:25.119 ************************************ 00:12:25.119 START TEST nvme_xnvme 00:12:25.119 ************************************ 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:25.119 * Looking for test storage... 00:12:25.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.119 19:10:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.119 --rc genhtml_branch_coverage=1 00:12:25.119 --rc genhtml_function_coverage=1 00:12:25.119 --rc genhtml_legend=1 00:12:25.119 --rc geninfo_all_blocks=1 00:12:25.119 --rc geninfo_unexecuted_blocks=1 00:12:25.119 00:12:25.119 ' 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.119 --rc genhtml_branch_coverage=1 00:12:25.119 --rc genhtml_function_coverage=1 00:12:25.119 --rc genhtml_legend=1 00:12:25.119 --rc geninfo_all_blocks=1 00:12:25.119 --rc geninfo_unexecuted_blocks=1 00:12:25.119 00:12:25.119 ' 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.119 --rc genhtml_branch_coverage=1 00:12:25.119 --rc genhtml_function_coverage=1 00:12:25.119 --rc genhtml_legend=1 00:12:25.119 --rc geninfo_all_blocks=1 00:12:25.119 --rc geninfo_unexecuted_blocks=1 00:12:25.119 00:12:25.119 ' 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.119 --rc genhtml_branch_coverage=1 00:12:25.119 --rc genhtml_function_coverage=1 00:12:25.119 --rc genhtml_legend=1 00:12:25.119 --rc geninfo_all_blocks=1 00:12:25.119 --rc geninfo_unexecuted_blocks=1 00:12:25.119 00:12:25.119 ' 00:12:25.119 19:10:34 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:25.119 19:10:34 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:25.119 19:10:34 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:25.119 19:10:34 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:25.119 19:10:34 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:25.120 19:10:34 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:25.120 19:10:34 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:25.120 19:10:34 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:25.120 19:10:34 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:25.383 #define SPDK_CONFIG_H 00:12:25.383 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:25.383 #define SPDK_CONFIG_APPS 1 00:12:25.383 #define SPDK_CONFIG_ARCH native 00:12:25.383 #define SPDK_CONFIG_ASAN 1 00:12:25.383 #undef SPDK_CONFIG_AVAHI 00:12:25.383 #undef SPDK_CONFIG_CET 00:12:25.383 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:25.383 #define SPDK_CONFIG_COVERAGE 1 00:12:25.383 #define SPDK_CONFIG_CROSS_PREFIX 00:12:25.383 #undef SPDK_CONFIG_CRYPTO 00:12:25.383 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:25.383 #undef SPDK_CONFIG_CUSTOMOCF 00:12:25.383 #undef SPDK_CONFIG_DAOS 00:12:25.383 #define SPDK_CONFIG_DAOS_DIR 00:12:25.383 #define SPDK_CONFIG_DEBUG 1 00:12:25.383 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:25.383 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:25.383 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:25.383 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:25.383 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:25.383 #undef SPDK_CONFIG_DPDK_UADK 00:12:25.383 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:25.383 #define SPDK_CONFIG_EXAMPLES 1 00:12:25.383 #undef SPDK_CONFIG_FC 00:12:25.383 #define SPDK_CONFIG_FC_PATH 00:12:25.383 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:25.383 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:25.383 #define SPDK_CONFIG_FSDEV 1 00:12:25.383 #undef SPDK_CONFIG_FUSE 00:12:25.383 #undef SPDK_CONFIG_FUZZER 00:12:25.383 #define SPDK_CONFIG_FUZZER_LIB 00:12:25.383 #undef SPDK_CONFIG_GOLANG 00:12:25.383 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:25.383 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:25.383 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:25.383 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:25.383 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:25.383 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:25.383 #undef SPDK_CONFIG_HAVE_LZ4 00:12:25.383 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:25.383 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:25.383 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:25.383 #define SPDK_CONFIG_IDXD 1 00:12:25.383 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:25.383 #undef SPDK_CONFIG_IPSEC_MB 00:12:25.383 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:25.383 #define SPDK_CONFIG_ISAL 1 00:12:25.383 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:25.383 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:25.383 #define SPDK_CONFIG_LIBDIR 00:12:25.383 #undef SPDK_CONFIG_LTO 00:12:25.383 #define SPDK_CONFIG_MAX_LCORES 128 00:12:25.383 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:25.383 #define SPDK_CONFIG_NVME_CUSE 1 00:12:25.383 #undef SPDK_CONFIG_OCF 00:12:25.383 #define SPDK_CONFIG_OCF_PATH 00:12:25.383 #define SPDK_CONFIG_OPENSSL_PATH 00:12:25.383 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:25.383 #define SPDK_CONFIG_PGO_DIR 00:12:25.383 #undef SPDK_CONFIG_PGO_USE 00:12:25.383 #define SPDK_CONFIG_PREFIX /usr/local 00:12:25.383 #undef SPDK_CONFIG_RAID5F 00:12:25.383 #undef SPDK_CONFIG_RBD 00:12:25.383 #define SPDK_CONFIG_RDMA 1 00:12:25.383 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:25.383 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:25.383 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:25.383 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:25.383 #define SPDK_CONFIG_SHARED 1 00:12:25.383 #undef SPDK_CONFIG_SMA 00:12:25.383 #define SPDK_CONFIG_TESTS 1 00:12:25.383 #undef SPDK_CONFIG_TSAN 00:12:25.383 #define SPDK_CONFIG_UBLK 1 00:12:25.383 #define SPDK_CONFIG_UBSAN 1 00:12:25.383 #undef SPDK_CONFIG_UNIT_TESTS 00:12:25.383 #undef SPDK_CONFIG_URING 00:12:25.383 #define SPDK_CONFIG_URING_PATH 00:12:25.383 #undef SPDK_CONFIG_URING_ZNS 00:12:25.383 #undef SPDK_CONFIG_USDT 00:12:25.383 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:25.383 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:25.383 #undef SPDK_CONFIG_VFIO_USER 00:12:25.383 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:25.383 #define SPDK_CONFIG_VHOST 1 00:12:25.383 #define SPDK_CONFIG_VIRTIO 1 00:12:25.383 #undef SPDK_CONFIG_VTUNE 00:12:25.383 #define SPDK_CONFIG_VTUNE_DIR 00:12:25.383 #define SPDK_CONFIG_WERROR 1 00:12:25.383 #define SPDK_CONFIG_WPDK_DIR 00:12:25.383 #define SPDK_CONFIG_XNVME 1 00:12:25.383 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:25.383 19:10:34 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:25.383 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.383 19:10:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.383 19:10:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.383 19:10:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.383 19:10:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.383 19:10:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.383 19:10:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.383 19:10:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.383 19:10:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:25.383 19:10:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.383 19:10:34 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:25.383 
19:10:34 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:25.383 19:10:34 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:25.384 19:10:34 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:25.384 19:10:34 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:25.384 19:10:34 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:25.384 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:25.384 19:10:34 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:25.385 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
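The autotest_common.sh@199-244 trace above is the sanitizer bring-up that every test in this run inherits. A minimal standalone sketch of the same setup, using the option strings exactly as traced (the harness additionally folds any pre-existing suppression content into the file, which is elided here):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Rebuild the LeakSanitizer suppression file and point LSan at it; the
# libfuse3 entry silences a known leak outside SPDK's control.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file
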
00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68639 ]] 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68639 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.KBwNms 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.KBwNms/tests/xnvme /tmp/spdk.KBwNms 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:25.385 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13981134848 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5587243008 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13981134848 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5587243008 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.385 19:10:34 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:25.385 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98088366080 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1614413824 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:25.386 * Looking for test storage... 
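The df/read loop traced above is set_test_storage: it snapshots every mount into associative arrays keyed by mount point, then walks the storage candidates until one has the requested 2 GiB (plus 64 MiB of slack). A sketch of the same technique; --block-size=1 is an assumption made to get the byte units the comparison uses, and the harness may arrange units differently:

# Snapshot all mounts, keyed by mount point, in the same column order
# the trace reads them (source, fstype, size, used, available, mount).
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T --block-size=1 | grep -v Filesystem)

requested_size=$((2147483648 + 67108864))   # 2 GiB + 64 MiB slack, as traced
target_dir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
if (( ${avails[$mount]} >= requested_size )); then
    printf '* Found test storage at %s\n' "$target_dir"
fi
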
00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13981134848 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.386 --rc genhtml_branch_coverage=1 00:12:25.386 --rc genhtml_function_coverage=1 00:12:25.386 --rc genhtml_legend=1 00:12:25.386 --rc geninfo_all_blocks=1 00:12:25.386 --rc geninfo_unexecuted_blocks=1 00:12:25.386 00:12:25.386 ' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.386 --rc genhtml_branch_coverage=1 00:12:25.386 --rc genhtml_function_coverage=1 00:12:25.386 --rc genhtml_legend=1 00:12:25.386 --rc geninfo_all_blocks=1 
00:12:25.386 --rc geninfo_unexecuted_blocks=1 00:12:25.386 00:12:25.386 ' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.386 --rc genhtml_branch_coverage=1 00:12:25.386 --rc genhtml_function_coverage=1 00:12:25.386 --rc genhtml_legend=1 00:12:25.386 --rc geninfo_all_blocks=1 00:12:25.386 --rc geninfo_unexecuted_blocks=1 00:12:25.386 00:12:25.386 ' 00:12:25.386 19:10:34 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.386 --rc genhtml_branch_coverage=1 00:12:25.386 --rc genhtml_function_coverage=1 00:12:25.386 --rc genhtml_legend=1 00:12:25.386 --rc geninfo_all_blocks=1 00:12:25.386 --rc geninfo_unexecuted_blocks=1 00:12:25.386 00:12:25.386 ' 00:12:25.386 19:10:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.386 19:10:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.386 19:10:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.386 19:10:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.386 19:10:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.386 19:10:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:25.386 19:10:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.386 19:10:34 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:25.387 19:10:34 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:25.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.907 Waiting for block devices as requested 00:12:25.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:26.168 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:26.168 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:31.461 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:31.461 19:10:40 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:31.721 19:10:41 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:31.721 19:10:41 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:31.983 19:10:41 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:31.983 No valid GPT data, bailing 00:12:31.983 19:10:41 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:31.983 19:10:41 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:31.983 19:10:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:31.983 19:10:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.983 19:10:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.983 19:10:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.983 ************************************ 00:12:31.983 START TEST xnvme_rpc 00:12:31.983 ************************************ 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69031 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69031 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69031 ']' 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.983 19:10:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:31.983 [2024-11-27 19:10:41.568346] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
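The xnvme_rpc test starting here boils down to four RPCs against a freshly launched target: create the xnvme bdev over /dev/nvme0n1 with the libaio mechanism, read each parameter back through framework_get_config, delete the bdev, and kill the target. A condensed sketch; rpc_cmd in the harness wraps scripts/rpc.py, and the positional argument order below only mirrors the traced rpc_cmd call (filename, name, io_mechanism), so treat it as an approximation of the real CLI:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
spdk_tgt=$!
sleep 2   # crude stand-in for the harness's waitforlisten

"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
"$SPDK/scripts/rpc.py" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$spdk_tgt"
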
00:12:31.983 [2024-11-27 19:10:41.568710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:12:32.244 [2024-11-27 19:10:41.736806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.521 [2024-11-27 19:10:41.882175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.099 xnvme_bdev 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.099 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.361 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:33.362 19:10:42 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69031 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69031 ']' 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69031 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69031 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.362 killing process with pid 69031 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69031' 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69031 00:12:33.362 19:10:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69031 00:12:35.279 00:12:35.279 real 0m3.179s 00:12:35.279 user 0m3.041s 00:12:35.279 sys 0m0.588s 00:12:35.279 19:10:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.279 19:10:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 ************************************ 00:12:35.279 END TEST xnvme_rpc 00:12:35.279 ************************************ 00:12:35.279 19:10:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:35.279 19:10:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.279 19:10:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.279 19:10:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 ************************************ 00:12:35.279 START TEST xnvme_bdevperf 00:12:35.279 ************************************ 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.279 19:10:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 { 00:12:35.279 "subsystems": [ 00:12:35.279 { 00:12:35.279 "subsystem": "bdev", 00:12:35.279 "config": [ 00:12:35.279 { 00:12:35.279 "params": { 00:12:35.279 "io_mechanism": "libaio", 00:12:35.279 "conserve_cpu": false, 00:12:35.279 "filename": "/dev/nvme0n1", 00:12:35.279 "name": "xnvme_bdev" 00:12:35.279 }, 00:12:35.279 "method": "bdev_xnvme_create" 00:12:35.279 }, 00:12:35.279 { 00:12:35.279 "method": "bdev_wait_for_examine" 00:12:35.279 } 00:12:35.279 ] 00:12:35.279 } 00:12:35.279 ] 00:12:35.279 } 00:12:35.279 [2024-11-27 19:10:44.796056] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:35.279 [2024-11-27 19:10:44.796238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69104 ] 00:12:35.541 [2024-11-27 19:10:44.964686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.541 [2024-11-27 19:10:45.100920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.113 Running I/O for 5 seconds... 00:12:38.002 23838.00 IOPS, 93.12 MiB/s [2024-11-27T19:10:48.580Z] 24718.50 IOPS, 96.56 MiB/s [2024-11-27T19:10:49.524Z] 24897.00 IOPS, 97.25 MiB/s [2024-11-27T19:10:50.468Z] 26223.50 IOPS, 102.44 MiB/s 00:12:40.833 Latency(us) 00:12:40.833 [2024-11-27T19:10:50.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.833 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:40.833 xnvme_bdev : 5.00 28585.51 111.66 0.00 0.00 2234.03 165.42 9880.81 00:12:40.833 [2024-11-27T19:10:50.468Z] =================================================================================================================== 00:12:40.833 [2024-11-27T19:10:50.468Z] Total : 28585.51 111.66 0.00 0.00 2234.03 165.42 9880.81 00:12:41.777 19:10:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:41.777 19:10:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:41.777 19:10:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:41.777 19:10:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:41.777 19:10:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:41.777 { 00:12:41.777 "subsystems": [ 00:12:41.777 { 00:12:41.777 "subsystem": "bdev", 00:12:41.777 "config": [ 00:12:41.777 { 00:12:41.777 "params": { 00:12:41.777 "io_mechanism": "libaio", 00:12:41.777 "conserve_cpu": false, 00:12:41.777 "filename": "/dev/nvme0n1", 00:12:41.777 "name": "xnvme_bdev" 00:12:41.777 }, 00:12:41.777 "method": "bdev_xnvme_create" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "method": "bdev_wait_for_examine" 00:12:41.777 } 00:12:41.777 ] 00:12:41.777 } 00:12:41.777 ] 00:12:41.777 } 00:12:41.777 [2024-11-27 19:10:51.290215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
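gen_conf above emits the bdev-subsystem JSON that bdevperf consumes through /dev/fd/62, i.e. a bash process substitution, so no config file ever touches disk. A sketch of the equivalent invocation, with the JSON taken verbatim from the trace:

gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# Flags exactly as traced: queue depth 64, random reads, 5 s, 4 KiB I/O,
# targeting only the xnvme_bdev created by the JSON config.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The xnvme_fio_plugin test further down feeds the same gen_conf output to fio's spdk_bdev engine via --spdk_json_conf, preloading the ASan runtime alongside the plugin (LD_PRELOAD='/usr/lib64/libasan.so.8 .../build/fio/spdk_bdev') because fio itself is not built with the sanitizer.
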
00:12:41.777 [2024-11-27 19:10:51.290330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69186 ] 00:12:42.038 [2024-11-27 19:10:51.449376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.038 [2024-11-27 19:10:51.561335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.300 Running I/O for 5 seconds... 00:12:44.631 38129.00 IOPS, 148.94 MiB/s [2024-11-27T19:10:55.211Z] 39155.50 IOPS, 152.95 MiB/s [2024-11-27T19:10:56.154Z] 39754.33 IOPS, 155.29 MiB/s [2024-11-27T19:10:57.097Z] 39201.25 IOPS, 153.13 MiB/s 00:12:47.462 Latency(us) 00:12:47.462 [2024-11-27T19:10:57.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.462 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:47.462 xnvme_bdev : 5.00 38698.78 151.17 0.00 0.00 1649.58 176.44 9225.45 00:12:47.462 [2024-11-27T19:10:57.097Z] =================================================================================================================== 00:12:47.462 [2024-11-27T19:10:57.097Z] Total : 38698.78 151.17 0.00 0.00 1649.58 176.44 9225.45 00:12:48.034 00:12:48.034 real 0m12.906s 00:12:48.034 user 0m4.869s 00:12:48.034 sys 0m6.683s 00:12:48.034 19:10:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.034 19:10:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.034 ************************************ 00:12:48.034 END TEST xnvme_bdevperf 00:12:48.034 ************************************ 00:12:48.034 19:10:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:48.034 19:10:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.034 19:10:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.034 19:10:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.034 ************************************ 00:12:48.034 START TEST xnvme_fio_plugin 00:12:48.034 ************************************ 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.034 19:10:57 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.034 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.295 19:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.295 { 00:12:48.295 "subsystems": [ 00:12:48.295 { 00:12:48.295 "subsystem": "bdev", 00:12:48.295 "config": [ 00:12:48.295 { 00:12:48.295 "params": { 00:12:48.295 "io_mechanism": "libaio", 00:12:48.295 "conserve_cpu": false, 00:12:48.295 "filename": "/dev/nvme0n1", 00:12:48.295 "name": "xnvme_bdev" 00:12:48.295 }, 00:12:48.295 "method": "bdev_xnvme_create" 00:12:48.295 }, 00:12:48.295 { 00:12:48.295 "method": "bdev_wait_for_examine" 00:12:48.295 } 00:12:48.295 ] 00:12:48.295 } 00:12:48.295 ] 00:12:48.295 } 00:12:48.295 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.295 fio-3.35 00:12:48.295 Starting 1 thread 00:12:54.897 00:12:54.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69301: Wed Nov 27 19:11:03 2024 00:12:54.897 read: IOPS=37.4k, BW=146MiB/s (153MB/s)(731MiB/5001msec) 00:12:54.897 slat (usec): min=4, max=4958, avg=20.95, stdev=82.15 00:12:54.897 clat (usec): min=31, max=9808, avg=1142.14, stdev=592.75 00:12:54.897 lat (usec): min=157, max=9813, avg=1163.08, stdev=589.53 00:12:54.897 clat percentiles (usec): 00:12:54.897 | 1.00th=[ 217], 5.00th=[ 343], 10.00th=[ 465], 20.00th=[ 627], 00:12:54.897 | 30.00th=[ 775], 40.00th=[ 914], 50.00th=[ 1057], 60.00th=[ 1221], 00:12:54.897 | 70.00th=[ 1401], 80.00th=[ 1614], 90.00th=[ 1893], 95.00th=[ 2147], 00:12:54.897 | 99.00th=[ 2900], 99.50th=[ 3326], 99.90th=[ 4080], 99.95th=[ 4359], 00:12:54.897 | 99.99th=[ 7767] 00:12:54.897 bw ( KiB/s): min=116048, max=224064, per=100.00%, avg=153268.44, stdev=38149.82, 
samples=9 00:12:54.897 iops : min=29012, max=56016, avg=38317.11, stdev=9537.46, samples=9 00:12:54.897 lat (usec) : 50=0.01%, 250=1.91%, 500=9.86%, 750=16.70%, 1000=17.48% 00:12:54.897 lat (msec) : 2=46.74%, 4=7.18%, 10=0.12% 00:12:54.897 cpu : usr=36.34%, sys=54.16%, ctx=11, majf=0, minf=764 00:12:54.897 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.3%, 16=23.5%, 32=62.3%, >=64=2.1% 00:12:54.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.897 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:54.897 issued rwts: total=187161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.897 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:54.897 00:12:54.897 Run status group 0 (all jobs): 00:12:54.897 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=731MiB (767MB), run=5001-5001msec 00:12:54.897 ----------------------------------------------------- 00:12:54.897 Suppressions used: 00:12:54.897 count bytes template 00:12:54.897 1 11 /usr/src/fio/parse.c 00:12:54.897 1 8 libtcmalloc_minimal.so 00:12:54.897 1 904 libcrypto.so 00:12:54.897 ----------------------------------------------------- 00:12:54.897 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:55.159 19:11:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.159 { 00:12:55.159 "subsystems": [ 00:12:55.159 { 00:12:55.159 "subsystem": "bdev", 00:12:55.159 "config": [ 00:12:55.159 { 00:12:55.159 "params": { 00:12:55.159 "io_mechanism": "libaio", 00:12:55.159 "conserve_cpu": false, 00:12:55.159 "filename": "/dev/nvme0n1", 00:12:55.159 "name": "xnvme_bdev" 00:12:55.159 }, 00:12:55.159 "method": "bdev_xnvme_create" 00:12:55.159 }, 00:12:55.159 { 00:12:55.159 "method": "bdev_wait_for_examine" 00:12:55.159 } 00:12:55.159 ] 00:12:55.159 } 00:12:55.159 ] 00:12:55.159 } 00:12:55.159 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:55.159 fio-3.35 00:12:55.159 Starting 1 thread 00:13:01.756 00:13:01.756 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69391: Wed Nov 27 19:11:10 2024 00:13:01.756 write: IOPS=35.5k, BW=139MiB/s (145MB/s)(694MiB/5001msec); 0 zone resets 00:13:01.756 slat (usec): min=3, max=1672, avg=23.04, stdev=70.59 00:13:01.756 clat (usec): min=16, max=9735, avg=1157.70, stdev=595.11 00:13:01.756 lat (usec): min=71, max=9740, avg=1180.73, stdev=592.58 00:13:01.756 clat percentiles (usec): 00:13:01.756 | 1.00th=[ 235], 5.00th=[ 367], 10.00th=[ 486], 20.00th=[ 676], 00:13:01.756 | 30.00th=[ 832], 40.00th=[ 971], 50.00th=[ 1090], 60.00th=[ 1221], 00:13:01.756 | 70.00th=[ 1369], 80.00th=[ 1549], 90.00th=[ 1795], 95.00th=[ 2114], 00:13:01.756 | 99.00th=[ 3261], 99.50th=[ 3687], 99.90th=[ 4424], 99.95th=[ 5669], 00:13:01.756 | 99.99th=[ 8586] 00:13:01.756 bw ( KiB/s): min=132680, max=164584, per=100.00%, avg=142207.11, stdev=9635.33, samples=9 00:13:01.756 iops : min=33170, max=41146, avg=35551.78, stdev=2408.83, samples=9 00:13:01.756 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=1.24%, 500=9.37% 00:13:01.756 lat (usec) : 750=13.70%, 1000=18.15% 00:13:01.756 lat (msec) : 2=51.15%, 4=6.12%, 10=0.26% 00:13:01.756 cpu : usr=30.68%, sys=55.98%, ctx=14, majf=0, minf=765 00:13:01.756 IO depths : 1=0.3%, 2=0.9%, 4=3.1%, 8=9.4%, 16=24.5%, 32=59.9%, >=64=2.0% 00:13:01.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.756 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:01.756 issued rwts: total=0,177636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:01.756 00:13:01.756 Run status group 0 (all jobs): 00:13:01.756 WRITE: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=694MiB (728MB), run=5001-5001msec 00:13:02.017 ----------------------------------------------------- 00:13:02.017 Suppressions used: 00:13:02.017 count bytes template 00:13:02.017 1 11 /usr/src/fio/parse.c 00:13:02.017 1 8 libtcmalloc_minimal.so 00:13:02.017 1 904 libcrypto.so 00:13:02.017 ----------------------------------------------------- 00:13:02.017 00:13:02.017 00:13:02.017 real 0m13.836s 00:13:02.017 user 0m6.161s 00:13:02.017 sys 0m6.128s 
00:13:02.017 19:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.017 ************************************ 00:13:02.017 END TEST xnvme_fio_plugin 00:13:02.017 ************************************ 00:13:02.017 19:11:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:02.017 19:11:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:02.017 19:11:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:02.017 19:11:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:02.017 19:11:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:02.017 19:11:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.017 19:11:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.017 19:11:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.017 ************************************ 00:13:02.017 START TEST xnvme_rpc 00:13:02.017 ************************************ 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69480 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69480 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69480 ']' 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.017 19:11:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:02.278 [2024-11-27 19:11:11.681770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:02.278 [2024-11-27 19:11:11.681935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69480 ] 00:13:02.278 [2024-11-27 19:11:11.852318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.540 [2024-11-27 19:11:11.979459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 xnvme_bdev 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:03.375 19:11:12 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69480 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69480 ']' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69480 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69480 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.375 killing process with pid 69480 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69480' 00:13:03.375 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69480 00:13:03.376 19:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69480 00:13:05.291 00:13:05.291 real 0m2.988s 00:13:05.291 user 0m2.996s 00:13:05.291 sys 0m0.490s 00:13:05.291 19:11:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.291 ************************************ 00:13:05.291 END TEST xnvme_rpc 00:13:05.291 ************************************ 00:13:05.291 19:11:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.292 19:11:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:05.292 19:11:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:05.292 19:11:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.292 19:11:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.292 ************************************ 00:13:05.292 START TEST xnvme_bdevperf 00:13:05.292 ************************************ 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:05.292 19:11:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:05.292 { 00:13:05.292 "subsystems": [ 00:13:05.292 { 00:13:05.292 "subsystem": "bdev", 00:13:05.292 "config": [ 00:13:05.292 { 00:13:05.292 "params": { 00:13:05.292 "io_mechanism": "libaio", 00:13:05.292 "conserve_cpu": true, 00:13:05.292 "filename": "/dev/nvme0n1", 00:13:05.292 "name": "xnvme_bdev" 00:13:05.292 }, 00:13:05.292 "method": "bdev_xnvme_create" 00:13:05.292 }, 00:13:05.292 { 00:13:05.292 "method": "bdev_wait_for_examine" 00:13:05.292 } 00:13:05.292 ] 00:13:05.292 } 00:13:05.292 ] 00:13:05.292 } 00:13:05.292 [2024-11-27 19:11:14.717568] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:05.292 [2024-11-27 19:11:14.717719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69554 ] 00:13:05.292 [2024-11-27 19:11:14.883889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.552 [2024-11-27 19:11:15.014310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.814 Running I/O for 5 seconds... 00:13:07.715 31467.00 IOPS, 122.92 MiB/s [2024-11-27T19:11:18.741Z] 31563.00 IOPS, 123.29 MiB/s [2024-11-27T19:11:19.694Z] 32214.00 IOPS, 125.84 MiB/s [2024-11-27T19:11:20.639Z] 32204.25 IOPS, 125.80 MiB/s 00:13:11.004 Latency(us) 00:13:11.004 [2024-11-27T19:11:20.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.004 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:11.004 xnvme_bdev : 5.00 32275.26 126.08 0.00 0.00 1978.33 327.68 14821.22 00:13:11.004 [2024-11-27T19:11:20.639Z] =================================================================================================================== 00:13:11.004 [2024-11-27T19:11:20.639Z] Total : 32275.26 126.08 0.00 0.00 1978.33 327.68 14821.22 00:13:11.578 19:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.578 19:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:11.578 19:11:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:11.578 19:11:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:11.578 19:11:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.578 { 00:13:11.578 "subsystems": [ 00:13:11.578 { 00:13:11.578 "subsystem": "bdev", 00:13:11.578 "config": [ 00:13:11.578 { 00:13:11.578 "params": { 00:13:11.578 "io_mechanism": "libaio", 00:13:11.578 "conserve_cpu": true, 00:13:11.578 "filename": "/dev/nvme0n1", 00:13:11.578 "name": "xnvme_bdev" 00:13:11.578 }, 00:13:11.578 "method": "bdev_xnvme_create" 00:13:11.578 }, 00:13:11.578 { 00:13:11.578 "method": "bdev_wait_for_examine" 00:13:11.578 } 00:13:11.578 ] 00:13:11.578 } 00:13:11.578 ] 00:13:11.578 } 00:13:11.578 [2024-11-27 19:11:21.210443] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:11.578 [2024-11-27 19:11:21.210596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69629 ] 00:13:11.839 [2024-11-27 19:11:21.375819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.100 [2024-11-27 19:11:21.508384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.362 Running I/O for 5 seconds... 00:13:14.251 35921.00 IOPS, 140.32 MiB/s [2024-11-27T19:11:25.274Z] 36969.00 IOPS, 144.41 MiB/s [2024-11-27T19:11:25.847Z] 36083.67 IOPS, 140.95 MiB/s [2024-11-27T19:11:27.230Z] 35687.75 IOPS, 139.41 MiB/s 00:13:17.596 Latency(us) 00:13:17.596 [2024-11-27T19:11:27.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.596 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:17.596 xnvme_bdev : 5.00 35392.37 138.25 0.00 0.00 1803.86 261.51 7864.32 00:13:17.596 [2024-11-27T19:11:27.231Z] =================================================================================================================== 00:13:17.596 [2024-11-27T19:11:27.231Z] Total : 35392.37 138.25 0.00 0.00 1803.86 261.51 7864.32 00:13:18.164 00:13:18.164 real 0m13.017s 00:13:18.164 user 0m4.929s 00:13:18.164 sys 0m6.266s 00:13:18.164 19:11:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.164 19:11:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.164 ************************************ 00:13:18.164 END TEST xnvme_bdevperf 00:13:18.164 ************************************ 00:13:18.164 19:11:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:18.164 19:11:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:18.164 19:11:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.164 19:11:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.164 ************************************ 00:13:18.164 START TEST xnvme_fio_plugin 00:13:18.164 ************************************ 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.164 19:11:27 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:18.164 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.165 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.165 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:18.165 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:18.165 19:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:18.165 { 00:13:18.165 "subsystems": [ 00:13:18.165 { 00:13:18.165 "subsystem": "bdev", 00:13:18.165 "config": [ 00:13:18.165 { 00:13:18.165 "params": { 00:13:18.165 "io_mechanism": "libaio", 00:13:18.165 "conserve_cpu": true, 00:13:18.165 "filename": "/dev/nvme0n1", 00:13:18.165 "name": "xnvme_bdev" 00:13:18.165 }, 00:13:18.165 "method": "bdev_xnvme_create" 00:13:18.165 }, 00:13:18.165 { 00:13:18.165 "method": "bdev_wait_for_examine" 00:13:18.165 } 00:13:18.165 ] 00:13:18.165 } 00:13:18.165 ] 00:13:18.165 } 00:13:18.425 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:18.425 fio-3.35 00:13:18.425 Starting 1 thread 00:13:25.109 00:13:25.109 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69748: Wed Nov 27 19:11:33 2024 00:13:25.109 read: IOPS=32.8k, BW=128MiB/s (134MB/s)(640MiB/5001msec) 00:13:25.109 slat (usec): min=4, max=1654, avg=23.89, stdev=97.39 00:13:25.109 clat (usec): min=105, max=4693, avg=1315.30, stdev=518.31 00:13:25.109 lat (usec): min=203, max=4778, avg=1339.18, stdev=508.84 00:13:25.109 clat percentiles (usec): 00:13:25.109 | 1.00th=[ 269], 5.00th=[ 502], 10.00th=[ 660], 20.00th=[ 898], 00:13:25.109 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1418], 00:13:25.109 | 70.00th=[ 1532], 80.00th=[ 1680], 90.00th=[ 1926], 95.00th=[ 2180], 00:13:25.109 | 99.00th=[ 2900], 99.50th=[ 3195], 99.90th=[ 3752], 99.95th=[ 3949], 00:13:25.109 | 99.99th=[ 4228] 00:13:25.109 bw ( KiB/s): min=123376, max=138048, per=100.00%, avg=131557.33, stdev=5008.32, 
samples=9 00:13:25.109 iops : min=30844, max=34512, avg=32889.33, stdev=1252.08, samples=9 00:13:25.109 lat (usec) : 250=0.80%, 500=4.14%, 750=8.56%, 1000=12.58% 00:13:25.109 lat (msec) : 2=65.62%, 4=8.26%, 10=0.04% 00:13:25.109 cpu : usr=34.48%, sys=56.52%, ctx=12, majf=0, minf=764 00:13:25.109 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.6%, 16=23.9%, 32=61.1%, >=64=2.1% 00:13:25.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.109 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:25.109 issued rwts: total=163820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:25.109 00:13:25.109 Run status group 0 (all jobs): 00:13:25.109 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=640MiB (671MB), run=5001-5001msec 00:13:25.109 ----------------------------------------------------- 00:13:25.109 Suppressions used: 00:13:25.109 count bytes template 00:13:25.109 1 11 /usr/src/fio/parse.c 00:13:25.109 1 8 libtcmalloc_minimal.so 00:13:25.109 1 904 libcrypto.so 00:13:25.109 ----------------------------------------------------- 00:13:25.109 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:25.109 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.110 19:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.110 { 00:13:25.110 "subsystems": [ 00:13:25.110 { 00:13:25.110 "subsystem": "bdev", 00:13:25.110 "config": [ 00:13:25.110 { 00:13:25.110 "params": { 00:13:25.110 "io_mechanism": "libaio", 00:13:25.110 "conserve_cpu": true, 00:13:25.110 "filename": "/dev/nvme0n1", 00:13:25.110 "name": "xnvme_bdev" 00:13:25.110 }, 00:13:25.110 "method": "bdev_xnvme_create" 00:13:25.110 }, 00:13:25.110 { 00:13:25.110 "method": "bdev_wait_for_examine" 00:13:25.110 } 00:13:25.110 ] 00:13:25.110 } 00:13:25.110 ] 00:13:25.110 } 00:13:25.371 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:25.371 fio-3.35 00:13:25.371 Starting 1 thread 00:13:31.961 00:13:31.961 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69840: Wed Nov 27 19:11:40 2024 00:13:31.962 write: IOPS=33.4k, BW=130MiB/s (137MB/s)(652MiB/5001msec); 0 zone resets 00:13:31.962 slat (usec): min=4, max=1625, avg=24.27, stdev=88.91 00:13:31.962 clat (usec): min=103, max=5189, avg=1262.74, stdev=560.60 00:13:31.962 lat (usec): min=200, max=5262, avg=1287.01, stdev=554.23 00:13:31.962 clat percentiles (usec): 00:13:31.962 | 1.00th=[ 260], 5.00th=[ 429], 10.00th=[ 586], 20.00th=[ 783], 00:13:31.962 | 30.00th=[ 955], 40.00th=[ 1090], 50.00th=[ 1237], 60.00th=[ 1352], 00:13:31.962 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1942], 95.00th=[ 2212], 00:13:31.962 | 99.00th=[ 2999], 99.50th=[ 3425], 99.90th=[ 3916], 99.95th=[ 4113], 00:13:31.962 | 99.99th=[ 4359] 00:13:31.962 bw ( KiB/s): min=124952, max=149312, per=100.00%, avg=134981.33, stdev=7434.83, samples=9 00:13:31.962 iops : min=31238, max=37328, avg=33745.33, stdev=1858.71, samples=9 00:13:31.962 lat (usec) : 250=0.89%, 500=6.21%, 750=11.00%, 1000=15.24% 00:13:31.962 lat (msec) : 2=58.22%, 4=8.38%, 10=0.08% 00:13:31.962 cpu : usr=32.46%, sys=57.16%, ctx=8, majf=0, minf=765 00:13:31.962 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.7%, 16=24.4%, 32=60.9%, >=64=2.0% 00:13:31.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.962 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:31.962 issued rwts: total=0,166968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.962 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:31.962 00:13:31.962 Run status group 0 (all jobs): 00:13:31.962 WRITE: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=652MiB (684MB), run=5001-5001msec 00:13:32.224 ----------------------------------------------------- 00:13:32.224 Suppressions used: 00:13:32.224 count bytes template 00:13:32.224 1 11 /usr/src/fio/parse.c 00:13:32.224 1 8 libtcmalloc_minimal.so 00:13:32.224 1 904 libcrypto.so 00:13:32.224 ----------------------------------------------------- 00:13:32.224 00:13:32.224 00:13:32.224 real 0m13.922s 00:13:32.224 user 0m6.229s 00:13:32.224 sys 0m6.334s 00:13:32.224 19:11:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.224 19:11:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:32.224 ************************************ 00:13:32.224 END TEST xnvme_fio_plugin 00:13:32.224 ************************************ 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:32.224 19:11:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:32.224 19:11:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.224 19:11:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.224 19:11:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.224 ************************************ 00:13:32.224 START TEST xnvme_rpc 00:13:32.224 ************************************ 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69926 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69926 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69926 ']' 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:32.224 19:11:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.224 [2024-11-27 19:11:41.812984] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:32.224 [2024-11-27 19:11:41.813151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69926 ] 00:13:32.486 [2024-11-27 19:11:41.974893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.486 [2024-11-27 19:11:42.105852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 xnvme_bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:33.431 19:11:42 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69926 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69926 ']' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69926 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69926 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.431 killing process with pid 69926 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69926' 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69926 00:13:33.431 19:11:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69926 00:13:35.348 00:13:35.348 real 0m2.939s 00:13:35.348 user 0m2.940s 00:13:35.348 sys 0m0.490s 00:13:35.348 19:11:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.348 ************************************ 00:13:35.348 END TEST xnvme_rpc 00:13:35.348 ************************************ 00:13:35.348 19:11:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.348 19:11:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:35.348 19:11:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.348 19:11:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.348 19:11:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.348 ************************************ 00:13:35.348 START TEST xnvme_bdevperf 00:13:35.348 ************************************ 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.348 19:11:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.348 { 00:13:35.348 "subsystems": [ 00:13:35.348 { 00:13:35.348 "subsystem": "bdev", 00:13:35.348 "config": [ 00:13:35.348 { 00:13:35.348 "params": { 00:13:35.348 "io_mechanism": "io_uring", 00:13:35.348 "conserve_cpu": false, 00:13:35.348 "filename": "/dev/nvme0n1", 00:13:35.348 "name": "xnvme_bdev" 00:13:35.348 }, 00:13:35.348 "method": "bdev_xnvme_create" 00:13:35.348 }, 00:13:35.348 { 00:13:35.348 "method": "bdev_wait_for_examine" 00:13:35.348 } 00:13:35.348 ] 00:13:35.348 } 00:13:35.348 ] 00:13:35.348 } 00:13:35.348 [2024-11-27 19:11:44.812786] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:35.348 [2024-11-27 19:11:44.812937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69995 ] 00:13:35.348 [2024-11-27 19:11:44.976837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.609 [2024-11-27 19:11:45.110084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.870 Running I/O for 5 seconds... 00:13:38.200 34105.00 IOPS, 133.22 MiB/s [2024-11-27T19:11:48.778Z] 34210.50 IOPS, 133.63 MiB/s [2024-11-27T19:11:49.733Z] 34364.33 IOPS, 134.24 MiB/s [2024-11-27T19:11:50.739Z] 34435.00 IOPS, 134.51 MiB/s [2024-11-27T19:11:50.739Z] 34525.00 IOPS, 134.86 MiB/s 00:13:41.104 Latency(us) 00:13:41.104 [2024-11-27T19:11:50.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.104 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:41.104 xnvme_bdev : 5.01 34492.30 134.74 0.00 0.00 1850.93 327.68 36095.21 00:13:41.104 [2024-11-27T19:11:50.739Z] =================================================================================================================== 00:13:41.104 [2024-11-27T19:11:50.739Z] Total : 34492.30 134.74 0.00 0.00 1850.93 327.68 36095.21 00:13:41.678 19:11:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.678 19:11:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:41.678 19:11:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:41.678 19:11:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.678 19:11:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 { 00:13:41.678 "subsystems": [ 00:13:41.678 { 00:13:41.678 "subsystem": "bdev", 00:13:41.678 "config": [ 00:13:41.678 { 00:13:41.678 "params": { 00:13:41.678 "io_mechanism": "io_uring", 00:13:41.678 "conserve_cpu": false, 00:13:41.678 "filename": "/dev/nvme0n1", 00:13:41.678 "name": "xnvme_bdev" 00:13:41.678 }, 00:13:41.678 "method": "bdev_xnvme_create" 00:13:41.678 }, 00:13:41.678 { 00:13:41.678 "method": "bdev_wait_for_examine" 00:13:41.678 } 00:13:41.678 ] 00:13:41.678 } 00:13:41.678 ] 00:13:41.678 } 00:13:41.678 [2024-11-27 19:11:51.277669] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:41.678 [2024-11-27 19:11:51.277824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70070 ] 00:13:41.939 [2024-11-27 19:11:51.440865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.939 [2024-11-27 19:11:51.564685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.512 Running I/O for 5 seconds... 00:13:44.401 36364.00 IOPS, 142.05 MiB/s [2024-11-27T19:11:54.977Z] 36406.50 IOPS, 142.21 MiB/s [2024-11-27T19:11:55.918Z] 36255.33 IOPS, 141.62 MiB/s [2024-11-27T19:11:57.303Z] 35825.75 IOPS, 139.94 MiB/s [2024-11-27T19:11:57.303Z] 37466.80 IOPS, 146.35 MiB/s 00:13:47.668 Latency(us) 00:13:47.668 [2024-11-27T19:11:57.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.668 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:47.668 xnvme_bdev : 5.00 37444.37 146.27 0.00 0.00 1704.56 211.10 10838.65 00:13:47.668 [2024-11-27T19:11:57.303Z] =================================================================================================================== 00:13:47.668 [2024-11-27T19:11:57.303Z] Total : 37444.37 146.27 0.00 0.00 1704.56 211.10 10838.65 00:13:48.242 00:13:48.242 real 0m12.904s 00:13:48.242 user 0m5.839s 00:13:48.242 sys 0m6.756s 00:13:48.242 19:11:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.242 19:11:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:48.242 ************************************ 00:13:48.242 END TEST xnvme_bdevperf 00:13:48.242 ************************************ 00:13:48.242 19:11:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:48.242 19:11:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.242 19:11:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.242 19:11:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.242 ************************************ 00:13:48.242 START TEST xnvme_fio_plugin 00:13:48.242 ************************************ 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.242 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:48.243 19:11:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:48.243 { 00:13:48.243 "subsystems": [ 00:13:48.243 { 00:13:48.243 "subsystem": "bdev", 00:13:48.243 "config": [ 00:13:48.243 { 00:13:48.243 "params": { 00:13:48.243 "io_mechanism": "io_uring", 00:13:48.243 "conserve_cpu": false, 00:13:48.243 "filename": "/dev/nvme0n1", 00:13:48.243 "name": "xnvme_bdev" 00:13:48.243 }, 00:13:48.243 "method": "bdev_xnvme_create" 00:13:48.243 }, 00:13:48.243 { 00:13:48.243 "method": "bdev_wait_for_examine" 00:13:48.243 } 00:13:48.243 ] 00:13:48.243 } 00:13:48.243 ] 00:13:48.243 } 00:13:48.503 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:48.503 fio-3.35 00:13:48.503 Starting 1 thread 00:13:55.094 00:13:55.094 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70189: Wed Nov 27 19:12:03 2024 00:13:55.094 read: IOPS=41.4k, BW=162MiB/s (170MB/s)(809MiB/5001msec) 00:13:55.094 slat (nsec): min=2872, max=99987, avg=4019.03, stdev=1861.53 00:13:55.094 clat (usec): min=594, max=3604, avg=1383.95, stdev=321.19 00:13:55.094 lat (usec): min=597, max=3609, avg=1387.97, stdev=321.50 00:13:55.094 clat percentiles (usec): 00:13:55.094 | 1.00th=[ 799], 5.00th=[ 898], 10.00th=[ 963], 20.00th=[ 1074], 00:13:55.094 | 30.00th=[ 1172], 40.00th=[ 1303], 50.00th=[ 1401], 60.00th=[ 1483], 00:13:55.094 | 70.00th=[ 1565], 80.00th=[ 1647], 90.00th=[ 1778], 95.00th=[ 1909], 00:13:55.094 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 2835], 99.95th=[ 3097], 00:13:55.094 | 99.99th=[ 3359] 00:13:55.094 bw ( 
KiB/s): min=142051, max=193024, per=100.00%, avg=167960.33, stdev=22723.68, samples=9 00:13:55.094 iops : min=35512, max=48256, avg=41990.00, stdev=5681.03, samples=9 00:13:55.094 lat (usec) : 750=0.44%, 1000=12.75% 00:13:55.094 lat (msec) : 2=83.92%, 4=2.88% 00:13:55.094 cpu : usr=32.88%, sys=65.88%, ctx=15, majf=0, minf=762 00:13:55.094 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:55.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.094 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:55.094 issued rwts: total=207147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.094 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:55.094 00:13:55.094 Run status group 0 (all jobs): 00:13:55.094 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=809MiB (848MB), run=5001-5001msec 00:13:55.094 ----------------------------------------------------- 00:13:55.094 Suppressions used: 00:13:55.094 count bytes template 00:13:55.094 1 11 /usr/src/fio/parse.c 00:13:55.094 1 8 libtcmalloc_minimal.so 00:13:55.094 1 904 libcrypto.so 00:13:55.094 ----------------------------------------------------- 00:13:55.094 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:55.094 19:12:04 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:55.094 19:12:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:55.094 { 00:13:55.094 "subsystems": [ 00:13:55.094 { 00:13:55.094 "subsystem": "bdev", 00:13:55.094 "config": [ 00:13:55.094 { 00:13:55.094 "params": { 00:13:55.094 "io_mechanism": "io_uring", 00:13:55.094 "conserve_cpu": false, 00:13:55.095 "filename": "/dev/nvme0n1", 00:13:55.095 "name": "xnvme_bdev" 00:13:55.095 }, 00:13:55.095 "method": "bdev_xnvme_create" 00:13:55.095 }, 00:13:55.095 { 00:13:55.095 "method": "bdev_wait_for_examine" 00:13:55.095 } 00:13:55.095 ] 00:13:55.095 } 00:13:55.095 ] 00:13:55.095 } 00:13:55.356 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:55.356 fio-3.35 00:13:55.356 Starting 1 thread 00:14:01.955 00:14:01.955 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70281: Wed Nov 27 19:12:10 2024 00:14:01.955 write: IOPS=35.5k, BW=139MiB/s (145MB/s)(693MiB/5001msec); 0 zone resets 00:14:01.955 slat (usec): min=2, max=103, avg= 4.51, stdev= 2.41 00:14:01.955 clat (usec): min=425, max=6638, avg=1622.87, stdev=239.97 00:14:01.955 lat (usec): min=434, max=6643, avg=1627.38, stdev=240.60 00:14:01.955 clat percentiles (usec): 00:14:01.955 | 1.00th=[ 1237], 5.00th=[ 1336], 10.00th=[ 1385], 20.00th=[ 1434], 00:14:01.955 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1631], 00:14:01.955 | 70.00th=[ 1696], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2073], 00:14:01.955 | 99.00th=[ 2376], 99.50th=[ 2474], 99.90th=[ 2802], 99.95th=[ 3130], 00:14:01.955 | 99.99th=[ 5735] 00:14:01.955 bw ( KiB/s): min=137688, max=149168, per=100.00%, avg=142220.44, stdev=3812.38, samples=9 00:14:01.955 iops : min=34422, max=37292, avg=35555.11, stdev=953.10, samples=9 00:14:01.955 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:14:01.955 lat (msec) : 2=92.68%, 4=7.24%, 10=0.02% 00:14:01.955 cpu : usr=32.38%, sys=66.12%, ctx=11, majf=0, minf=763 00:14:01.955 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:14:01.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.955 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:01.955 issued rwts: total=0,177425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.955 00:14:01.955 Run status group 0 (all jobs): 00:14:01.955 WRITE: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=693MiB (727MB), run=5001-5001msec 00:14:01.955 ----------------------------------------------------- 00:14:01.955 Suppressions used: 00:14:01.955 count bytes template 00:14:01.955 1 11 /usr/src/fio/parse.c 00:14:01.955 1 8 libtcmalloc_minimal.so 00:14:01.955 1 904 libcrypto.so 00:14:01.955 ----------------------------------------------------- 00:14:01.955 00:14:01.955 ************************************ 00:14:01.955 END TEST xnvme_fio_plugin 00:14:01.955 
************************************ 00:14:01.955 00:14:01.955 real 0m13.731s 00:14:01.955 user 0m6.125s 00:14:01.955 sys 0m7.149s 00:14:01.955 19:12:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.955 19:12:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:01.955 19:12:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:01.955 19:12:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:01.955 19:12:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:01.955 19:12:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:01.955 19:12:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.955 19:12:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.955 19:12:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.955 ************************************ 00:14:01.955 START TEST xnvme_rpc 00:14:01.955 ************************************ 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70367 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70367 00:14:01.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70367 ']' 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.955 19:12:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:02.216 [2024-11-27 19:12:11.607327] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
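The xnvme_rpc pass beginning here drives the freshly started spdk_tgt over its UNIX-domain RPC socket: create an xnvme bdev on the block device, read the registered parameters back out of framework_get_config with jq, then tear it down. Condensed into a manual session it would look roughly like the sketch below (the repo-relative scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, not shown in the log):

    # hypothetical manual replay of the RPC sequence traced below
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c sets conserve_cpu=true
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/nvme0n1
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev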
00:14:02.216 [2024-11-27 19:12:11.607484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70367 ] 00:14:02.216 [2024-11-27 19:12:11.773763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.478 [2024-11-27 19:12:11.904594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.050 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.050 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:03.050 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.051 xnvme_bdev 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.051 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70367 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70367 ']' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70367 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70367 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.311 killing process with pid 70367 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70367' 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70367 00:14:03.311 19:12:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70367 00:14:05.227 ************************************ 00:14:05.227 END TEST xnvme_rpc 00:14:05.227 ************************************ 00:14:05.227 00:14:05.227 real 0m2.951s 00:14:05.227 user 0m2.946s 00:14:05.227 sys 0m0.503s 00:14:05.227 19:12:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.227 19:12:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 19:12:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:05.227 19:12:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.227 19:12:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.227 19:12:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 ************************************ 00:14:05.227 START TEST xnvme_bdevperf 00:14:05.227 ************************************ 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
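gen_conf, traced above, emits the JSON bdev configuration that appears next in the log; bdevperf consumes it through --json /dev/fd/62, i.e. over a process-substitution file descriptor rather than a file on disk. A standalone equivalent would be roughly as follows (a sketch: the build/examples path and flags come from the xtrace, the inline JSON mirrors the config printed below):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create",
       "params":{"io_mechanism":"io_uring","conserve_cpu":true,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # 64-deep 4KiB randread for 5s against the xnvme_bdev target
    build/examples/bdevperf --json <(echo "$conf") \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096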
00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.227 19:12:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.228 { 00:14:05.228 "subsystems": [ 00:14:05.228 { 00:14:05.228 "subsystem": "bdev", 00:14:05.228 "config": [ 00:14:05.228 { 00:14:05.228 "params": { 00:14:05.228 "io_mechanism": "io_uring", 00:14:05.228 "conserve_cpu": true, 00:14:05.228 "filename": "/dev/nvme0n1", 00:14:05.228 "name": "xnvme_bdev" 00:14:05.228 }, 00:14:05.228 "method": "bdev_xnvme_create" 00:14:05.228 }, 00:14:05.228 { 00:14:05.228 "method": "bdev_wait_for_examine" 00:14:05.228 } 00:14:05.228 ] 00:14:05.228 } 00:14:05.228 ] 00:14:05.228 } 00:14:05.228 [2024-11-27 19:12:14.611215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:05.228 [2024-11-27 19:12:14.611362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70440 ] 00:14:05.228 [2024-11-27 19:12:14.777183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.489 [2024-11-27 19:12:14.904262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.751 Running I/O for 5 seconds... 00:14:07.639 34275.00 IOPS, 133.89 MiB/s [2024-11-27T19:12:18.217Z] 34221.00 IOPS, 133.68 MiB/s [2024-11-27T19:12:19.603Z] 37056.67 IOPS, 144.75 MiB/s [2024-11-27T19:12:20.547Z] 41382.25 IOPS, 161.65 MiB/s [2024-11-27T19:12:20.547Z] 43207.80 IOPS, 168.78 MiB/s 00:14:10.912 Latency(us) 00:14:10.912 [2024-11-27T19:12:20.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.912 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:10.912 xnvme_bdev : 5.01 43160.38 168.60 0.00 0.00 1478.44 441.11 11947.72 00:14:10.912 [2024-11-27T19:12:20.547Z] =================================================================================================================== 00:14:10.912 [2024-11-27T19:12:20.547Z] Total : 43160.38 168.60 0.00 0.00 1478.44 441.11 11947.72 00:14:11.484 19:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.484 19:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:11.484 19:12:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:11.484 19:12:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.484 19:12:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.484 { 00:14:11.484 "subsystems": [ 00:14:11.484 { 00:14:11.484 "subsystem": "bdev", 00:14:11.484 "config": [ 00:14:11.484 { 00:14:11.484 "params": { 00:14:11.484 "io_mechanism": "io_uring", 00:14:11.484 "conserve_cpu": true, 00:14:11.484 "filename": "/dev/nvme0n1", 00:14:11.484 "name": "xnvme_bdev" 00:14:11.484 }, 00:14:11.484 "method": "bdev_xnvme_create" 00:14:11.484 }, 00:14:11.484 { 00:14:11.484 "method": "bdev_wait_for_examine" 00:14:11.484 } 00:14:11.484 ] 00:14:11.484 } 00:14:11.484 ] 00:14:11.484 } 00:14:11.484 [2024-11-27 19:12:21.071279] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
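The randread pass above and the randwrite pass now starting are iterations of the same loop: the xtrace shows xnvme.sh binding io_pattern_ref to the io_uring pattern list and re-running bdevperf once per pattern. Paraphrased (a sketch of the harness logic, not the literal script; gen_conf is the harness helper that prints the JSON config):

    io_uring=(randread randwrite)                  # pattern list for this io_mechanism
    for io_pattern in "${io_uring[@]}"; do
        build/examples/bdevperf --json <(gen_conf) \
            -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done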
00:14:11.484 [2024-11-27 19:12:21.071434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:14:11.745 [2024-11-27 19:12:21.237962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.006 [2024-11-27 19:12:21.380922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.267 Running I/O for 5 seconds... 00:14:14.199 35890.00 IOPS, 140.20 MiB/s [2024-11-27T19:12:24.777Z] 36947.00 IOPS, 144.32 MiB/s [2024-11-27T19:12:25.722Z] 36732.33 IOPS, 143.49 MiB/s [2024-11-27T19:12:27.108Z] 36744.25 IOPS, 143.53 MiB/s [2024-11-27T19:12:27.108Z] 36721.00 IOPS, 143.44 MiB/s 00:14:17.473 Latency(us) 00:14:17.473 [2024-11-27T19:12:27.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.473 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:17.473 xnvme_bdev : 5.01 36680.35 143.28 0.00 0.00 1739.29 129.18 11544.42 00:14:17.473 [2024-11-27T19:12:27.108Z] =================================================================================================================== 00:14:17.473 [2024-11-27T19:12:27.108Z] Total : 36680.35 143.28 0.00 0.00 1739.29 129.18 11544.42 00:14:18.046 00:14:18.046 real 0m13.061s 00:14:18.046 user 0m6.426s 00:14:18.046 sys 0m6.012s 00:14:18.046 19:12:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.046 ************************************ 00:14:18.046 END TEST xnvme_bdevperf 00:14:18.046 ************************************ 00:14:18.046 19:12:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.046 19:12:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:18.046 19:12:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.046 19:12:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.046 19:12:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.046 ************************************ 00:14:18.046 START TEST xnvme_fio_plugin 00:14:18.046 ************************************ 00:14:18.046 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:18.046 19:12:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.047 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:18.308 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.308 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.308 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:18.308 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.308 19:12:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.308 { 00:14:18.308 "subsystems": [ 00:14:18.308 { 00:14:18.308 "subsystem": "bdev", 00:14:18.308 "config": [ 00:14:18.308 { 00:14:18.308 "params": { 00:14:18.308 "io_mechanism": "io_uring", 00:14:18.308 "conserve_cpu": true, 00:14:18.308 "filename": "/dev/nvme0n1", 00:14:18.308 "name": "xnvme_bdev" 00:14:18.308 }, 00:14:18.308 "method": "bdev_xnvme_create" 00:14:18.308 }, 00:14:18.308 { 00:14:18.308 "method": "bdev_wait_for_examine" 00:14:18.308 } 00:14:18.308 ] 00:14:18.308 } 00:14:18.308 ] 00:14:18.308 } 00:14:18.308 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:18.308 fio-3.35 00:14:18.308 Starting 1 thread 00:14:24.901 00:14:24.901 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70636: Wed Nov 27 19:12:33 2024 00:14:24.901 read: IOPS=47.6k, BW=186MiB/s (195MB/s)(931MiB/5001msec) 00:14:24.901 slat (usec): min=2, max=137, avg= 4.12, stdev= 2.82 00:14:24.901 clat (usec): min=472, max=6349, avg=1184.41, stdev=340.64 00:14:24.901 lat (usec): min=475, max=6353, avg=1188.53, stdev=341.90 00:14:24.901 clat percentiles (usec): 00:14:24.901 | 1.00th=[ 734], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 914], 00:14:24.901 | 30.00th=[ 971], 40.00th=[ 1020], 50.00th=[ 1090], 60.00th=[ 1188], 00:14:24.901 | 70.00th=[ 1303], 80.00th=[ 1434], 90.00th=[ 1614], 95.00th=[ 1795], 00:14:24.901 | 99.00th=[ 2376], 99.50th=[ 2671], 99.90th=[ 3163], 99.95th=[ 3392], 00:14:24.901 | 99.99th=[ 4047] 00:14:24.901 bw ( KiB/s): 
min=147751, max=225280, per=98.74%, avg=188142.11, stdev=31056.15, samples=9 00:14:24.901 iops : min=36937, max=56320, avg=47035.44, stdev=7764.16, samples=9 00:14:24.901 lat (usec) : 500=0.01%, 750=1.47%, 1000=34.57% 00:14:24.901 lat (msec) : 2=61.41%, 4=2.54%, 10=0.01% 00:14:24.901 cpu : usr=38.22%, sys=58.32%, ctx=14, majf=0, minf=762 00:14:24.901 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:24.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.901 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:24.901 issued rwts: total=238237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.901 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.901 00:14:24.901 Run status group 0 (all jobs): 00:14:24.901 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=931MiB (976MB), run=5001-5001msec 00:14:25.163 ----------------------------------------------------- 00:14:25.163 Suppressions used: 00:14:25.163 count bytes template 00:14:25.163 1 11 /usr/src/fio/parse.c 00:14:25.163 1 8 libtcmalloc_minimal.so 00:14:25.163 1 904 libcrypto.so 00:14:25.163 ----------------------------------------------------- 00:14:25.163 00:14:25.163 19:12:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.163 19:12:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.164 19:12:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.164 19:12:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.164 { 00:14:25.164 "subsystems": [ 00:14:25.164 { 00:14:25.164 "subsystem": "bdev", 00:14:25.164 "config": [ 00:14:25.164 { 00:14:25.164 "params": { 00:14:25.164 "io_mechanism": "io_uring", 00:14:25.164 "conserve_cpu": true, 00:14:25.164 "filename": "/dev/nvme0n1", 00:14:25.164 "name": "xnvme_bdev" 00:14:25.164 }, 00:14:25.164 "method": "bdev_xnvme_create" 00:14:25.164 }, 00:14:25.164 { 00:14:25.164 "method": "bdev_wait_for_examine" 00:14:25.164 } 00:14:25.164 ] 00:14:25.164 } 00:14:25.164 ] 00:14:25.164 } 00:14:25.426 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:25.426 fio-3.35 00:14:25.426 Starting 1 thread 00:14:32.013 00:14:32.013 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70727: Wed Nov 27 19:12:40 2024 00:14:32.014 write: IOPS=36.1k, BW=141MiB/s (148MB/s)(706MiB/5002msec); 0 zone resets 00:14:32.014 slat (nsec): min=2892, max=66405, avg=4388.89, stdev=2140.17 00:14:32.014 clat (usec): min=354, max=8895, avg=1595.33, stdev=250.18 00:14:32.014 lat (usec): min=358, max=8899, avg=1599.71, stdev=250.63 00:14:32.014 clat percentiles (usec): 00:14:32.014 | 1.00th=[ 1074], 5.00th=[ 1254], 10.00th=[ 1336], 20.00th=[ 1418], 00:14:32.014 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1614], 00:14:32.014 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1909], 95.00th=[ 2040], 00:14:32.014 | 99.00th=[ 2311], 99.50th=[ 2409], 99.90th=[ 2704], 99.95th=[ 3097], 00:14:32.014 | 99.99th=[ 5342] 00:14:32.014 bw ( KiB/s): min=138752, max=151040, per=100.00%, avg=144677.11, stdev=4151.16, samples=9 00:14:32.014 iops : min=34688, max=37760, avg=36169.22, stdev=1037.79, samples=9 00:14:32.014 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.37% 00:14:32.014 lat (msec) : 2=93.43%, 4=6.15%, 10=0.02% 00:14:32.014 cpu : usr=37.59%, sys=57.89%, ctx=12, majf=0, minf=763 00:14:32.014 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:32.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.014 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:32.014 issued rwts: total=0,180616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:32.014 00:14:32.014 Run status group 0 (all jobs): 00:14:32.014 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=706MiB (740MB), run=5002-5002msec 00:14:32.276 ----------------------------------------------------- 00:14:32.276 Suppressions used: 00:14:32.276 count bytes template 00:14:32.276 1 11 /usr/src/fio/parse.c 00:14:32.276 1 8 libtcmalloc_minimal.so 00:14:32.276 1 904 libcrypto.so 00:14:32.276 ----------------------------------------------------- 00:14:32.276 00:14:32.276 00:14:32.276 real 0m14.037s 00:14:32.276 user 0m6.778s 00:14:32.276 sys 0m6.523s 00:14:32.276 19:12:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.276 ************************************ 00:14:32.276 END TEST xnvme_fio_plugin 00:14:32.276 ************************************ 00:14:32.276 19:12:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:32.276 19:12:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:32.276 19:12:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.276 19:12:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.276 19:12:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 ************************************ 00:14:32.276 START TEST xnvme_rpc 00:14:32.276 ************************************ 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70808 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70808 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70808 ']' 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:32.276 19:12:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 [2024-11-27 19:12:41.861346] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
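This second xnvme_rpc pass repeats the create/inspect/delete cycle with io_mechanism io_uring_cmd, which drives the NVMe generic character node /dev/ng0n1 rather than the block device; the empty string passed after io_uring_cmd in the trace below is the conserve_cpu flag left unset. A manual equivalent (a sketch; default RPC socket assumed):

    test -c /dev/ng0n1   # io_uring_cmd needs the generic char node to exist
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd   # no -c: conserve_cpu stays false
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect false
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev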
00:14:32.276 [2024-11-27 19:12:41.861531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70808 ] 00:14:32.538 [2024-11-27 19:12:42.025214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.538 [2024-11-27 19:12:42.171033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.505 xnvme_bdev 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.505 19:12:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:33.505 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70808 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70808 ']' 00:14:33.506 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70808 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70808 00:14:33.767 killing process with pid 70808 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70808' 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70808 00:14:33.767 19:12:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70808 00:14:35.685 ************************************ 00:14:35.685 END TEST xnvme_rpc 00:14:35.685 ************************************ 00:14:35.685 00:14:35.685 real 0m3.224s 00:14:35.685 user 0m3.117s 00:14:35.685 sys 0m0.601s 00:14:35.685 19:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.685 19:12:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.685 19:12:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:35.685 19:12:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:35.685 19:12:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.685 19:12:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.685 ************************************ 00:14:35.685 START TEST xnvme_bdevperf 00:14:35.685 ************************************ 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:35.685 19:12:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:35.685 { 00:14:35.685 "subsystems": [ 00:14:35.685 { 00:14:35.685 "subsystem": "bdev", 00:14:35.685 "config": [ 00:14:35.685 { 00:14:35.685 "params": { 00:14:35.685 "io_mechanism": "io_uring_cmd", 00:14:35.685 "conserve_cpu": false, 00:14:35.685 "filename": "/dev/ng0n1", 00:14:35.685 "name": "xnvme_bdev" 00:14:35.685 }, 00:14:35.685 "method": "bdev_xnvme_create" 00:14:35.685 }, 00:14:35.685 { 00:14:35.685 "method": "bdev_wait_for_examine" 00:14:35.685 } 00:14:35.685 ] 00:14:35.685 } 00:14:35.685 ] 00:14:35.685 } 00:14:35.685 [2024-11-27 19:12:45.144996] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:35.685 [2024-11-27 19:12:45.145167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:14:35.685 [2024-11-27 19:12:45.312518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.947 [2024-11-27 19:12:45.453326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.208 Running I/O for 5 seconds... 00:14:38.164 38161.00 IOPS, 149.07 MiB/s [2024-11-27T19:12:49.184Z] 39404.00 IOPS, 153.92 MiB/s [2024-11-27T19:12:50.126Z] 40589.33 IOPS, 158.55 MiB/s [2024-11-27T19:12:51.069Z] 39496.25 IOPS, 154.28 MiB/s 00:14:41.434 Latency(us) 00:14:41.434 [2024-11-27T19:12:51.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.434 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:41.434 xnvme_bdev : 5.00 39983.57 156.19 0.00 0.00 1596.84 340.28 10687.41 00:14:41.434 [2024-11-27T19:12:51.069Z] =================================================================================================================== 00:14:41.434 [2024-11-27T19:12:51.069Z] Total : 39983.57 156.19 0.00 0.00 1596.84 340.28 10687.41 00:14:42.004 19:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.004 19:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:42.004 19:12:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.004 19:12:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.004 19:12:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.004 { 00:14:42.004 "subsystems": [ 00:14:42.004 { 00:14:42.004 "subsystem": "bdev", 00:14:42.004 "config": [ 00:14:42.004 { 00:14:42.004 "params": { 00:14:42.004 "io_mechanism": "io_uring_cmd", 00:14:42.004 "conserve_cpu": false, 00:14:42.004 "filename": "/dev/ng0n1", 00:14:42.004 "name": "xnvme_bdev" 00:14:42.004 }, 00:14:42.004 "method": "bdev_xnvme_create" 00:14:42.004 }, 00:14:42.004 { 00:14:42.004 "method": "bdev_wait_for_examine" 00:14:42.004 } 00:14:42.004 ] 00:14:42.004 } 00:14:42.004 ] 00:14:42.004 } 00:14:42.004 [2024-11-27 19:12:51.605505] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
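The latency columns in these bdevperf tables follow from Little's law: with the queue held at depth 64, average completion latency is roughly qd / IOPS. Checking the io_uring_cmd randread pass above (a back-of-the-envelope sketch, not part of the harness):

    # 64 in-flight IOs at 39983.57 IOPS => expected average latency in microseconds
    echo 'scale=8; 64 / 39983.57 * 1000000' | bc   # ~1600.66 us vs 1596.84 us reported

The figures agree to within a fraction of a percent, which is consistent with the brief intervals at run start and end where the queue is not fully populated.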
00:14:42.004 [2024-11-27 19:12:51.605769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:14:42.264 [2024-11-27 19:12:51.763373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.264 [2024-11-27 19:12:51.870343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.526 Running I/O for 5 seconds... 00:14:44.853 40491.00 IOPS, 158.17 MiB/s [2024-11-27T19:12:55.431Z] 40939.50 IOPS, 159.92 MiB/s [2024-11-27T19:12:56.375Z] 41719.33 IOPS, 162.97 MiB/s [2024-11-27T19:12:57.318Z] 40767.00 IOPS, 159.25 MiB/s 00:14:47.683 Latency(us) 00:14:47.683 [2024-11-27T19:12:57.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.683 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:47.683 xnvme_bdev : 5.00 41024.68 160.25 0.00 0.00 1555.60 368.64 5570.56 00:14:47.683 [2024-11-27T19:12:57.318Z] =================================================================================================================== 00:14:47.683 [2024-11-27T19:12:57.318Z] Total : 41024.68 160.25 0.00 0.00 1555.60 368.64 5570.56 00:14:48.627 19:12:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.627 19:12:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:48.627 19:12:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.627 19:12:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.627 19:12:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.627 { 00:14:48.627 "subsystems": [ 00:14:48.627 { 00:14:48.627 "subsystem": "bdev", 00:14:48.627 "config": [ 00:14:48.627 { 00:14:48.627 "params": { 00:14:48.627 "io_mechanism": "io_uring_cmd", 00:14:48.627 "conserve_cpu": false, 00:14:48.627 "filename": "/dev/ng0n1", 00:14:48.627 "name": "xnvme_bdev" 00:14:48.627 }, 00:14:48.627 "method": "bdev_xnvme_create" 00:14:48.627 }, 00:14:48.627 { 00:14:48.627 "method": "bdev_wait_for_examine" 00:14:48.627 } 00:14:48.627 ] 00:14:48.627 } 00:14:48.627 ] 00:14:48.627 } 00:14:48.627 [2024-11-27 19:12:58.099986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:48.627 [2024-11-27 19:12:58.100409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71036 ] 00:14:48.889 [2024-11-27 19:12:58.263334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.889 [2024-11-27 19:12:58.403236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.164 Running I/O for 5 seconds... 
00:14:51.530 80896.00 IOPS, 316.00 MiB/s [2024-11-27T19:13:02.107Z] 81024.00 IOPS, 316.50 MiB/s [2024-11-27T19:13:03.046Z] 80832.00 IOPS, 315.75 MiB/s [2024-11-27T19:13:03.988Z] 81408.00 IOPS, 318.00 MiB/s 00:14:54.353 Latency(us) 00:14:54.353 [2024-11-27T19:13:03.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.353 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:54.353 xnvme_bdev : 5.00 80696.03 315.22 0.00 0.00 789.24 513.58 7007.31 00:14:54.353 [2024-11-27T19:13:03.988Z] =================================================================================================================== 00:14:54.353 [2024-11-27T19:13:03.988Z] Total : 80696.03 315.22 0.00 0.00 789.24 513.58 7007.31 00:14:54.924 19:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.924 19:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:54.924 19:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.924 19:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.924 19:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.924 { 00:14:54.924 "subsystems": [ 00:14:54.924 { 00:14:54.924 "subsystem": "bdev", 00:14:54.924 "config": [ 00:14:54.924 { 00:14:54.924 "params": { 00:14:54.924 "io_mechanism": "io_uring_cmd", 00:14:54.924 "conserve_cpu": false, 00:14:54.924 "filename": "/dev/ng0n1", 00:14:54.924 "name": "xnvme_bdev" 00:14:54.924 }, 00:14:54.924 "method": "bdev_xnvme_create" 00:14:54.924 }, 00:14:54.924 { 00:14:54.924 "method": "bdev_wait_for_examine" 00:14:54.924 } 00:14:54.924 ] 00:14:54.924 } 00:14:54.924 ] 00:14:54.924 } 00:14:54.924 [2024-11-27 19:13:04.533240] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:54.924 [2024-11-27 19:13:04.533346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:14:55.183 [2024-11-27 19:13:04.688519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.184 [2024-11-27 19:13:04.773770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.442 Running I/O for 5 seconds... 
00:14:57.760 8063.00 IOPS, 31.50 MiB/s [2024-11-27T19:13:08.331Z] 4747.50 IOPS, 18.54 MiB/s [2024-11-27T19:13:09.271Z] 14923.00 IOPS, 58.29 MiB/s [2024-11-27T19:13:10.207Z] 25385.25 IOPS, 99.16 MiB/s [2024-11-27T19:13:10.207Z] 31382.40 IOPS, 122.59 MiB/s 00:15:00.572 Latency(us) 00:15:00.572 [2024-11-27T19:13:10.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.572 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:00.572 xnvme_bdev : 5.00 31380.75 122.58 0.00 0.00 2035.97 76.80 175838.13 00:15:00.572 [2024-11-27T19:13:10.207Z] =================================================================================================================== 00:15:00.572 [2024-11-27T19:13:10.208Z] Total : 31380.75 122.58 0.00 0.00 2035.97 76.80 175838.13 00:15:01.144 00:15:01.144 real 0m25.673s 00:15:01.144 user 0m13.960s 00:15:01.144 sys 0m11.213s 00:15:01.144 19:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.144 ************************************ 00:15:01.144 END TEST xnvme_bdevperf 00:15:01.144 19:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.144 ************************************ 00:15:01.144 19:13:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:01.144 19:13:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.144 19:13:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.144 19:13:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 ************************************ 00:15:01.406 START TEST xnvme_fio_plugin 00:15:01.406 ************************************ 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- 
# gen_conf 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.406 19:13:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.406 { 00:15:01.406 "subsystems": [ 00:15:01.406 { 00:15:01.406 "subsystem": "bdev", 00:15:01.406 "config": [ 00:15:01.406 { 00:15:01.406 "params": { 00:15:01.406 "io_mechanism": "io_uring_cmd", 00:15:01.406 "conserve_cpu": false, 00:15:01.406 "filename": "/dev/ng0n1", 00:15:01.406 "name": "xnvme_bdev" 00:15:01.406 }, 00:15:01.406 "method": "bdev_xnvme_create" 00:15:01.406 }, 00:15:01.406 { 00:15:01.406 "method": "bdev_wait_for_examine" 00:15:01.406 } 00:15:01.406 ] 00:15:01.406 } 00:15:01.406 ] 00:15:01.406 } 00:15:01.406 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.406 fio-3.35 00:15:01.406 Starting 1 thread 00:15:07.994 00:15:07.994 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71230: Wed Nov 27 19:13:16 2024 00:15:07.994 read: IOPS=51.9k, BW=203MiB/s (213MB/s)(1014MiB/5001msec) 00:15:07.994 slat (nsec): min=2876, max=47983, avg=3776.58, stdev=1083.40 00:15:07.994 clat (usec): min=523, max=2817, avg=1089.05, stdev=197.97 00:15:07.994 lat (usec): min=528, max=2821, avg=1092.82, stdev=198.15 00:15:07.994 clat percentiles (usec): 00:15:07.994 | 1.00th=[ 750], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 930], 00:15:07.994 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1106], 00:15:07.994 | 70.00th=[ 1156], 80.00th=[ 1237], 90.00th=[ 1352], 95.00th=[ 1450], 00:15:07.994 | 99.00th=[ 1680], 99.50th=[ 1811], 99.90th=[ 2057], 99.95th=[ 2311], 00:15:07.994 | 99.99th=[ 2769] 00:15:07.994 bw ( KiB/s): min=199680, max=215040, per=99.74%, avg=207093.33, stdev=5249.66, samples=9 00:15:07.994 iops : min=49920, max=53760, avg=51773.33, stdev=1312.41, samples=9 00:15:07.994 lat (usec) : 750=1.02%, 1000=34.87% 00:15:07.994 lat (msec) : 2=63.95%, 4=0.15% 00:15:07.994 cpu : usr=37.26%, sys=62.08%, ctx=9, majf=0, minf=762 00:15:07.994 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:07.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.994 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:15:07.994 issued rwts: total=259604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.994 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:07.994 00:15:07.994 Run status group 0 (all jobs): 00:15:07.994 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1014MiB (1063MB), run=5001-5001msec 00:15:07.994 ----------------------------------------------------- 00:15:07.994 Suppressions used: 00:15:07.994 count bytes template 00:15:07.994 1 11 /usr/src/fio/parse.c 00:15:07.994 1 8 libtcmalloc_minimal.so 00:15:07.994 1 904 libcrypto.so 00:15:07.994 ----------------------------------------------------- 00:15:07.994 00:15:07.994 19:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.994 19:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.995 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:08.256 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.256 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.256 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:08.256 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.256 19:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.256 { 00:15:08.256 "subsystems": [ 00:15:08.256 { 00:15:08.256 "subsystem": "bdev", 00:15:08.256 "config": [ 00:15:08.256 { 00:15:08.256 "params": { 00:15:08.256 "io_mechanism": "io_uring_cmd", 00:15:08.256 "conserve_cpu": false, 00:15:08.256 "filename": "/dev/ng0n1", 00:15:08.256 "name": "xnvme_bdev" 00:15:08.256 }, 00:15:08.256 "method": "bdev_xnvme_create" 00:15:08.256 }, 00:15:08.256 { 00:15:08.256 "method": "bdev_wait_for_examine" 00:15:08.256 } 00:15:08.256 ] 00:15:08.256 } 00:15:08.256 ] 00:15:08.256 } 00:15:08.256 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:08.256 fio-3.35 00:15:08.256 Starting 1 thread 00:15:14.845 00:15:14.845 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71320: Wed Nov 27 19:13:23 2024 00:15:14.845 write: IOPS=52.0k, BW=203MiB/s (213MB/s)(1015MiB/5001msec); 0 zone resets 00:15:14.845 slat (nsec): min=2909, max=95005, avg=4079.38, stdev=1397.58 00:15:14.845 clat (usec): min=59, max=5430, avg=1077.64, stdev=217.42 00:15:14.845 lat (usec): min=64, max=5439, avg=1081.72, stdev=217.80 00:15:14.845 clat percentiles (usec): 00:15:14.845 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 914], 00:15:14.845 | 30.00th=[ 955], 40.00th=[ 996], 50.00th=[ 1037], 60.00th=[ 1090], 00:15:14.845 | 70.00th=[ 1139], 80.00th=[ 1221], 90.00th=[ 1369], 95.00th=[ 1500], 00:15:14.845 | 99.00th=[ 1795], 99.50th=[ 1909], 99.90th=[ 2180], 99.95th=[ 2278], 00:15:14.845 | 99.99th=[ 3163] 00:15:14.845 bw ( KiB/s): min=187608, max=219664, per=100.00%, avg=210266.33, stdev=9466.32, samples=9 00:15:14.845 iops : min=46902, max=54916, avg=52566.56, stdev=2366.57, samples=9 00:15:14.845 lat (usec) : 100=0.01%, 250=0.01%, 500=0.08%, 750=1.62%, 1000=39.01% 00:15:14.845 lat (msec) : 2=58.99%, 4=0.28%, 10=0.01% 00:15:14.845 cpu : usr=36.58%, sys=62.58%, ctx=34, majf=0, minf=763 00:15:14.845 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.3%, 16=24.7%, 32=50.7%, >=64=1.6% 00:15:14.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.845 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:14.845 issued rwts: total=0,259808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:14.845 00:15:14.845 Run status group 0 (all jobs): 00:15:14.845 WRITE: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1015MiB (1064MB), run=5001-5001msec 00:15:14.845 ----------------------------------------------------- 00:15:14.845 Suppressions used: 00:15:14.845 count bytes template 00:15:14.845 1 11 /usr/src/fio/parse.c 00:15:14.845 1 8 libtcmalloc_minimal.so 00:15:14.845 1 904 libcrypto.so 00:15:14.845 ----------------------------------------------------- 00:15:14.845 00:15:14.845 00:15:14.845 real 0m13.638s 00:15:14.845 user 0m6.412s 00:15:14.845 sys 0m6.816s 00:15:14.845 19:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.845 ************************************ 00:15:14.845 END TEST xnvme_fio_plugin 00:15:14.845 19:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.845 ************************************ 00:15:14.846 19:13:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:14.846 19:13:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:14.846 19:13:24 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:14.846 19:13:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:14.846 19:13:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.846 19:13:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.846 19:13:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.846 ************************************ 00:15:14.846 START TEST xnvme_rpc 00:15:14.846 ************************************ 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:14.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71400 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71400 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71400 ']' 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.846 19:13:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.106 [2024-11-27 19:13:24.551774] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:15.106 [2024-11-27 19:13:24.551889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71400 ] 00:15:15.106 [2024-11-27 19:13:24.711352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.368 [2024-11-27 19:13:24.826392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 xnvme_bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71400 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71400 ']' 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71400 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:16.311 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71400 00:15:16.312 killing process with pid 71400 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71400' 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71400 00:15:16.312 19:13:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71400 00:15:18.226 00:15:18.226 real 0m3.078s 00:15:18.226 user 0m3.007s 00:15:18.226 sys 0m0.548s 00:15:18.227 19:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.227 ************************************ 00:15:18.227 19:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 END TEST xnvme_rpc 00:15:18.227 ************************************ 00:15:18.227 19:13:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:18.227 19:13:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.227 19:13:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.227 19:13:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 ************************************ 00:15:18.227 START TEST xnvme_bdevperf 00:15:18.227 ************************************ 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:18.227 19:13:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 { 00:15:18.227 "subsystems": [ 00:15:18.227 { 00:15:18.227 "subsystem": "bdev", 00:15:18.227 "config": [ 00:15:18.227 { 00:15:18.227 "params": { 00:15:18.227 "io_mechanism": "io_uring_cmd", 00:15:18.227 "conserve_cpu": true, 00:15:18.227 "filename": "/dev/ng0n1", 00:15:18.227 "name": "xnvme_bdev" 00:15:18.227 }, 00:15:18.227 "method": "bdev_xnvme_create" 00:15:18.227 }, 00:15:18.227 { 00:15:18.227 "method": "bdev_wait_for_examine" 00:15:18.227 } 00:15:18.227 ] 00:15:18.227 } 00:15:18.227 ] 00:15:18.227 } 00:15:18.227 [2024-11-27 19:13:27.686839] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:18.227 [2024-11-27 19:13:27.687115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71474 ] 00:15:18.227 [2024-11-27 19:13:27.846583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.487 [2024-11-27 19:13:27.952620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.748 Running I/O for 5 seconds... 00:15:20.633 37568.00 IOPS, 146.75 MiB/s [2024-11-27T19:13:31.654Z] 40160.00 IOPS, 156.88 MiB/s [2024-11-27T19:13:32.598Z] 40426.67 IOPS, 157.92 MiB/s [2024-11-27T19:13:33.543Z] 41216.00 IOPS, 161.00 MiB/s 00:15:23.908 Latency(us) 00:15:23.908 [2024-11-27T19:13:33.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.908 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:23.908 xnvme_bdev : 5.00 41743.40 163.06 0.00 0.00 1529.51 718.38 4209.43 00:15:23.908 [2024-11-27T19:13:33.543Z] =================================================================================================================== 00:15:23.908 [2024-11-27T19:13:33.543Z] Total : 41743.40 163.06 0.00 0.00 1529.51 718.38 4209.43 00:15:24.853 19:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.853 19:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:24.853 19:13:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:24.853 19:13:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:24.853 19:13:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.853 { 00:15:24.853 "subsystems": [ 00:15:24.853 { 00:15:24.853 "subsystem": "bdev", 00:15:24.853 "config": [ 00:15:24.853 { 00:15:24.853 "params": { 00:15:24.853 "io_mechanism": "io_uring_cmd", 00:15:24.853 "conserve_cpu": true, 00:15:24.853 "filename": "/dev/ng0n1", 00:15:24.853 "name": "xnvme_bdev" 00:15:24.853 }, 00:15:24.853 "method": "bdev_xnvme_create" 00:15:24.853 }, 00:15:24.853 { 00:15:24.853 "method": "bdev_wait_for_examine" 00:15:24.853 } 00:15:24.853 ] 00:15:24.853 } 00:15:24.853 ] 00:15:24.853 } 00:15:24.853 [2024-11-27 19:13:34.213160] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:24.853 [2024-11-27 19:13:34.213312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71554 ] 00:15:24.853 [2024-11-27 19:13:34.385860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.115 [2024-11-27 19:13:34.519284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.375 Running I/O for 5 seconds... 00:15:27.258 42388.00 IOPS, 165.58 MiB/s [2024-11-27T19:13:37.836Z] 41961.00 IOPS, 163.91 MiB/s [2024-11-27T19:13:39.295Z] 43313.67 IOPS, 169.19 MiB/s [2024-11-27T19:13:39.869Z] 42674.00 IOPS, 166.70 MiB/s [2024-11-27T19:13:39.869Z] 38538.00 IOPS, 150.54 MiB/s 00:15:30.234 Latency(us) 00:15:30.234 [2024-11-27T19:13:39.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.234 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:30.234 xnvme_bdev : 5.01 38478.61 150.31 0.00 0.00 1657.55 81.13 29239.14 00:15:30.234 [2024-11-27T19:13:39.869Z] =================================================================================================================== 00:15:30.234 [2024-11-27T19:13:39.869Z] Total : 38478.61 150.31 0.00 0.00 1657.55 81.13 29239.14 00:15:31.177 19:13:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.177 19:13:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:31.177 19:13:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:31.177 19:13:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:31.177 19:13:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.177 { 00:15:31.177 "subsystems": [ 00:15:31.177 { 00:15:31.177 "subsystem": "bdev", 00:15:31.177 "config": [ 00:15:31.177 { 00:15:31.177 "params": { 00:15:31.177 "io_mechanism": "io_uring_cmd", 00:15:31.177 "conserve_cpu": true, 00:15:31.177 "filename": "/dev/ng0n1", 00:15:31.177 "name": "xnvme_bdev" 00:15:31.177 }, 00:15:31.177 "method": "bdev_xnvme_create" 00:15:31.177 }, 00:15:31.177 { 00:15:31.177 "method": "bdev_wait_for_examine" 00:15:31.177 } 00:15:31.177 ] 00:15:31.177 } 00:15:31.177 ] 00:15:31.178 } 00:15:31.178 [2024-11-27 19:13:40.704070] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:31.178 [2024-11-27 19:13:40.704416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71628 ] 00:15:31.439 [2024-11-27 19:13:40.861395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.439 [2024-11-27 19:13:40.991528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.702 Running I/O for 5 seconds... 
00:15:33.664 80256.00 IOPS, 313.50 MiB/s [2024-11-27T19:13:44.688Z] 78080.00 IOPS, 305.00 MiB/s [2024-11-27T19:13:45.630Z] 76117.33 IOPS, 297.33 MiB/s [2024-11-27T19:13:46.571Z] 76336.00 IOPS, 298.19 MiB/s 00:15:36.936 Latency(us) 00:15:36.936 [2024-11-27T19:13:46.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.936 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:36.936 xnvme_bdev : 5.00 76411.40 298.48 0.00 0.00 833.97 453.71 2823.09 00:15:36.936 [2024-11-27T19:13:46.571Z] =================================================================================================================== 00:15:36.936 [2024-11-27T19:13:46.571Z] Total : 76411.40 298.48 0.00 0.00 833.97 453.71 2823.09 00:15:37.509 19:13:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.509 19:13:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:37.509 19:13:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:37.509 19:13:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:37.509 19:13:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.509 { 00:15:37.509 "subsystems": [ 00:15:37.509 { 00:15:37.509 "subsystem": "bdev", 00:15:37.509 "config": [ 00:15:37.509 { 00:15:37.509 "params": { 00:15:37.509 "io_mechanism": "io_uring_cmd", 00:15:37.509 "conserve_cpu": true, 00:15:37.509 "filename": "/dev/ng0n1", 00:15:37.509 "name": "xnvme_bdev" 00:15:37.509 }, 00:15:37.509 "method": "bdev_xnvme_create" 00:15:37.509 }, 00:15:37.509 { 00:15:37.509 "method": "bdev_wait_for_examine" 00:15:37.509 } 00:15:37.509 ] 00:15:37.509 } 00:15:37.509 ] 00:15:37.509 } 00:15:37.509 [2024-11-27 19:13:47.122665] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:37.509 [2024-11-27 19:13:47.122938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:15:37.770 [2024-11-27 19:13:47.284051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.770 [2024-11-27 19:13:47.388463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.030 Running I/O for 5 seconds... 
00:15:40.356 42755.00 IOPS, 167.01 MiB/s [2024-11-27T19:13:50.936Z] 43765.00 IOPS, 170.96 MiB/s [2024-11-27T19:13:51.874Z] 44007.00 IOPS, 171.90 MiB/s [2024-11-27T19:13:52.810Z] 43315.50 IOPS, 169.20 MiB/s [2024-11-27T19:13:52.810Z] 42300.60 IOPS, 165.24 MiB/s 00:15:43.175 Latency(us) 00:15:43.175 [2024-11-27T19:13:52.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.175 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:43.175 xnvme_bdev : 5.01 42268.54 165.11 0.00 0.00 1509.06 228.43 21374.82 00:15:43.175 [2024-11-27T19:13:52.810Z] =================================================================================================================== 00:15:43.175 [2024-11-27T19:13:52.810Z] Total : 42268.54 165.11 0.00 0.00 1509.06 228.43 21374.82 00:15:44.117 00:15:44.117 real 0m25.920s 00:15:44.117 user 0m17.557s 00:15:44.117 sys 0m5.975s 00:15:44.117 19:13:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.117 19:13:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.117 ************************************ 00:15:44.117 END TEST xnvme_bdevperf 00:15:44.117 ************************************ 00:15:44.117 19:13:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:44.117 19:13:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.117 19:13:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.117 19:13:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.117 ************************************ 00:15:44.117 START TEST xnvme_fio_plugin 00:15:44.117 ************************************ 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.117 19:13:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.117 { 00:15:44.117 "subsystems": [ 00:15:44.117 { 00:15:44.117 "subsystem": "bdev", 00:15:44.117 "config": [ 00:15:44.117 { 00:15:44.117 "params": { 00:15:44.117 "io_mechanism": "io_uring_cmd", 00:15:44.117 "conserve_cpu": true, 00:15:44.117 "filename": "/dev/ng0n1", 00:15:44.117 "name": "xnvme_bdev" 00:15:44.117 }, 00:15:44.117 "method": "bdev_xnvme_create" 00:15:44.117 }, 00:15:44.117 { 00:15:44.117 "method": "bdev_wait_for_examine" 00:15:44.117 } 00:15:44.117 ] 00:15:44.117 } 00:15:44.117 ] 00:15:44.117 } 00:15:44.378 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.378 fio-3.35 00:15:44.378 Starting 1 thread 00:15:51.056 00:15:51.056 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71815: Wed Nov 27 19:13:59 2024 00:15:51.056 read: IOPS=40.7k, BW=159MiB/s (167MB/s)(795MiB/5001msec) 00:15:51.056 slat (usec): min=2, max=560, avg= 3.54, stdev= 2.19 00:15:51.056 clat (usec): min=879, max=3040, avg=1432.24, stdev=257.33 00:15:51.056 lat (usec): min=882, max=3072, avg=1435.78, stdev=257.94 00:15:51.056 clat percentiles (usec): 00:15:51.056 | 1.00th=[ 1020], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1205], 00:15:51.056 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1401], 60.00th=[ 1467], 00:15:51.056 | 70.00th=[ 1532], 80.00th=[ 1614], 90.00th=[ 1762], 95.00th=[ 1909], 00:15:51.056 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2606], 99.95th=[ 2704], 00:15:51.056 | 99.99th=[ 2933] 00:15:51.056 bw ( KiB/s): min=145920, max=171008, per=100.00%, avg=163726.22, stdev=8361.36, samples=9 00:15:51.056 iops : min=36480, max=42752, avg=40931.56, stdev=2090.34, samples=9 00:15:51.056 lat (usec) : 1000=0.60% 00:15:51.056 lat (msec) : 2=96.07%, 4=3.33% 00:15:51.056 cpu : usr=61.36%, sys=35.72%, ctx=18, majf=0, minf=762 00:15:51.056 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:51.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.056 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:51.056 issued rwts: total=203456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:51.056 00:15:51.056 Run status group 0 (all jobs): 00:15:51.056 READ: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=795MiB (833MB), run=5001-5001msec 00:15:51.056 ----------------------------------------------------- 00:15:51.056 Suppressions used: 00:15:51.056 count bytes template 00:15:51.056 1 11 /usr/src/fio/parse.c 00:15:51.056 1 8 libtcmalloc_minimal.so 00:15:51.056 1 904 libcrypto.so 00:15:51.056 ----------------------------------------------------- 00:15:51.056 00:15:51.056 19:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.318 19:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.318 { 00:15:51.318 "subsystems": [ 00:15:51.318 { 00:15:51.318 "subsystem": "bdev", 00:15:51.318 "config": [ 00:15:51.318 { 00:15:51.318 "params": { 00:15:51.318 "io_mechanism": "io_uring_cmd", 00:15:51.318 "conserve_cpu": true, 00:15:51.318 "filename": "/dev/ng0n1", 00:15:51.318 "name": "xnvme_bdev" 00:15:51.318 }, 00:15:51.318 "method": "bdev_xnvme_create" 00:15:51.318 }, 00:15:51.318 { 00:15:51.318 "method": "bdev_wait_for_examine" 00:15:51.318 } 00:15:51.318 ] 00:15:51.318 } 00:15:51.318 ] 00:15:51.318 } 00:15:51.318 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:51.318 fio-3.35 00:15:51.318 Starting 1 thread 00:15:57.902 00:15:57.902 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71909: Wed Nov 27 19:14:06 2024 00:15:57.902 write: IOPS=40.2k, BW=157MiB/s (165MB/s)(786MiB/5001msec); 0 zone resets 00:15:57.902 slat (usec): min=2, max=109, avg= 4.12, stdev= 2.09 00:15:57.902 clat (usec): min=383, max=5624, avg=1428.07, stdev=251.66 00:15:57.903 lat (usec): min=387, max=5628, avg=1432.19, stdev=252.13 00:15:57.903 clat percentiles (usec): 00:15:57.903 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1221], 00:15:57.903 | 30.00th=[ 1287], 40.00th=[ 1336], 50.00th=[ 1401], 60.00th=[ 1450], 00:15:57.903 | 70.00th=[ 1516], 80.00th=[ 1598], 90.00th=[ 1729], 95.00th=[ 1844], 00:15:57.903 | 99.00th=[ 2180], 99.50th=[ 2343], 99.90th=[ 3130], 99.95th=[ 3490], 00:15:57.903 | 99.99th=[ 4490] 00:15:57.903 bw ( KiB/s): min=148896, max=181392, per=100.00%, avg=162263.11, stdev=11536.75, samples=9 00:15:57.903 iops : min=37224, max=45348, avg=40565.78, stdev=2884.19, samples=9 00:15:57.903 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.60% 00:15:57.903 lat (msec) : 2=97.16%, 4=2.22%, 10=0.02% 00:15:57.903 cpu : usr=53.96%, sys=39.76%, ctx=14, majf=0, minf=763 00:15:57.903 IO depths : 1=1.4%, 2=2.9%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.5%, >=64=1.7% 00:15:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.903 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:57.903 issued rwts: total=0,201273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:57.903 00:15:57.903 Run status group 0 (all jobs): 00:15:57.903 WRITE: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=786MiB (824MB), run=5001-5001msec 00:15:58.164 ----------------------------------------------------- 00:15:58.164 Suppressions used: 00:15:58.164 count bytes template 00:15:58.164 1 11 /usr/src/fio/parse.c 00:15:58.164 1 8 libtcmalloc_minimal.so 00:15:58.164 1 904 libcrypto.so 00:15:58.164 ----------------------------------------------------- 00:15:58.164 00:15:58.164 00:15:58.164 real 0m14.057s 00:15:58.164 user 0m8.763s 00:15:58.164 sys 0m4.495s 00:15:58.164 ************************************ 00:15:58.164 END TEST xnvme_fio_plugin 00:15:58.164 ************************************ 00:15:58.164 19:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.164 19:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 19:14:07 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71400 00:15:58.164 19:14:07 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71400 ']' 00:15:58.164 Process with pid 71400 is not found 00:15:58.164 19:14:07 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71400 00:15:58.164 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71400) - No such process 00:15:58.164 19:14:07 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71400 is not found' 00:15:58.164 00:15:58.164 real 3m33.141s 00:15:58.164 user 1m52.844s 00:15:58.164 sys 1m25.072s 00:15:58.164 ************************************ 00:15:58.164 END TEST nvme_xnvme 00:15:58.164 ************************************ 00:15:58.164 19:14:07 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.164 19:14:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 19:14:07 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:58.164 19:14:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.164 19:14:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.164 19:14:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.164 ************************************ 00:15:58.164 START TEST blockdev_xnvme 00:15:58.164 ************************************ 00:15:58.164 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:58.425 * Looking for test storage... 00:15:58.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.425 19:14:07 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:58.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.425 --rc genhtml_branch_coverage=1 00:15:58.425 --rc genhtml_function_coverage=1 00:15:58.425 --rc genhtml_legend=1 00:15:58.425 --rc geninfo_all_blocks=1 00:15:58.425 --rc geninfo_unexecuted_blocks=1 00:15:58.425 00:15:58.425 ' 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:58.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.425 --rc genhtml_branch_coverage=1 00:15:58.425 --rc genhtml_function_coverage=1 00:15:58.425 --rc genhtml_legend=1 00:15:58.425 --rc geninfo_all_blocks=1 00:15:58.425 --rc geninfo_unexecuted_blocks=1 00:15:58.425 00:15:58.425 ' 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:58.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.425 --rc genhtml_branch_coverage=1 00:15:58.425 --rc genhtml_function_coverage=1 00:15:58.425 --rc genhtml_legend=1 00:15:58.425 --rc geninfo_all_blocks=1 00:15:58.425 --rc geninfo_unexecuted_blocks=1 00:15:58.425 00:15:58.425 ' 00:15:58.425 19:14:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:58.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.425 --rc genhtml_branch_coverage=1 00:15:58.425 --rc genhtml_function_coverage=1 00:15:58.425 --rc genhtml_legend=1 00:15:58.425 --rc geninfo_all_blocks=1 00:15:58.425 --rc geninfo_unexecuted_blocks=1 00:15:58.425 00:15:58.425 ' 00:15:58.425 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:58.425 19:14:07 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:58.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72048 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72048 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72048 ']' 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.426 19:14:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.426 19:14:07 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:58.426 [2024-11-27 19:14:08.057885] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:58.426 [2024-11-27 19:14:08.058041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72048 ] 00:15:58.687 [2024-11-27 19:14:08.222642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.948 [2024-11-27 19:14:08.368093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.892 19:14:09 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.892 19:14:09 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:59.892 19:14:09 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:59.892 19:14:09 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:59.892 19:14:09 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:59.892 19:14:09 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:59.892 19:14:09 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:00.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:00.727 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:00.727 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:00.727 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:00.727 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.727 19:14:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.727 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:00.727 nvme0n1 00:16:00.727 nvme0n2 00:16:00.727 nvme0n3 00:16:00.727 nvme1n1 00:16:00.727 nvme2n1 00:16:00.989 nvme3n1 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.989 19:14:10 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.989 19:14:10 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:00.989 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:00.990 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8950858a-f74a-4f88-b5b3-8f0c8699e097"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8950858a-f74a-4f88-b5b3-8f0c8699e097",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e269c4de-3689-428c-be7c-405d5c1fdc9e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e269c4de-3689-428c-be7c-405d5c1fdc9e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "72a543c1-9a50-4306-985b-0b09d1340429"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72a543c1-9a50-4306-985b-0b09d1340429",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9f47d458-7877-46f7-9027-f30147e7b1e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9f47d458-7877-46f7-9027-f30147e7b1e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cd3fd121-bd09-4c99-9986-49db85c58f1e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cd3fd121-bd09-4c99-9986-49db85c58f1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "36083079-95bc-415c-864d-b5e6292a99eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "36083079-95bc-415c-864d-b5e6292a99eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:00.990 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:00.990 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:00.990 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:00.990 19:14:10 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72048 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72048 ']' 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72048 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72048 00:16:00.990 killing process with pid 72048 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72048' 00:16:00.990 19:14:10 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72048 00:16:00.990 
19:14:10 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72048 00:16:02.905 19:14:12 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:02.905 19:14:12 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:02.905 19:14:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:02.905 19:14:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.905 19:14:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.905 ************************************ 00:16:02.905 START TEST bdev_hello_world 00:16:02.905 ************************************ 00:16:02.905 19:14:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:02.905 [2024-11-27 19:14:12.429987] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:02.905 [2024-11-27 19:14:12.430166] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72332 ] 00:16:03.164 [2024-11-27 19:14:12.589483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.164 [2024-11-27 19:14:12.684504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.423 [2024-11-27 19:14:13.001655] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:03.423 [2024-11-27 19:14:13.001856] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:03.423 [2024-11-27 19:14:13.001879] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:03.423 [2024-11-27 19:14:13.003704] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:03.423 [2024-11-27 19:14:13.004593] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:03.423 [2024-11-27 19:14:13.004633] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:03.423 [2024-11-27 19:14:13.005566] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
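The hello_world step above reduces to a single invocation of the bundled example binary. A minimal sketch for reproducing it by hand, using the same paths the harness logged (SPDK_DIR is introduced here for readability only and is not part of the original command):

    # Run SPDK's hello_bdev example against the first xNVMe bdev;
    # bdev.json carries the six bdev_xnvme_create definitions printed earlier.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout path as seen in the traces
    "$SPDK_DIR/build/examples/hello_bdev" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -b nvme0n1    # the bdev that "Hello World!" is written to and read back from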
00:16:03.423 00:16:03.423 [2024-11-27 19:14:13.005597] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:03.990 00:16:03.990 real 0m1.247s 00:16:03.990 user 0m0.919s 00:16:03.990 sys 0m0.204s 00:16:03.990 19:14:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.990 19:14:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:03.990 ************************************ 00:16:03.990 END TEST bdev_hello_world 00:16:03.990 ************************************ 00:16:04.249 19:14:13 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:04.249 19:14:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.249 19:14:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.249 19:14:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.249 ************************************ 00:16:04.249 START TEST bdev_bounds 00:16:04.249 ************************************ 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72363 00:16:04.249 Process bdevio pid: 72363 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72363' 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72363 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72363 ']' 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:04.249 19:14:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:04.249 [2024-11-27 19:14:13.724725] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
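bdev_bounds drives the same bdev set through the bdevio harness: the binary starts in wait mode, the suite waits for its RPC socket, and tests.py triggers every suite. A condensed sketch of that sequence, with the socket polling simplified (the suite's own waitforlisten helper does the real check):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_DIR/test/bdev/bdev.json" &      # -w: wait for a perform_tests RPC
    bdevio_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified stand-in for waitforlisten
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests   # runs the six suites below
    kill "$bdevio_pid"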
00:16:04.249 [2024-11-27 19:14:13.724828] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:16:04.249 [2024-11-27 19:14:13.876807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.508 [2024-11-27 19:14:13.971619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.508 [2024-11-27 19:14:13.971906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.508 [2024-11-27 19:14:13.971907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.074 19:14:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.074 19:14:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:05.074 19:14:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:05.333 I/O targets: 00:16:05.333 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:05.333 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:05.333 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:05.333 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:05.333 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:05.333 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:05.333 00:16:05.333 00:16:05.333 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.333 http://cunit.sourceforge.net/ 00:16:05.333 00:16:05.333 00:16:05.333 Suite: bdevio tests on: nvme3n1 00:16:05.333 Test: blockdev write read block ...passed 00:16:05.333 Test: blockdev write zeroes read block ...passed 00:16:05.333 Test: blockdev write zeroes read no split ...passed 00:16:05.333 Test: blockdev write zeroes read split ...passed 00:16:05.333 Test: blockdev write zeroes read split partial ...passed 00:16:05.333 Test: blockdev reset ...passed 00:16:05.333 Test: blockdev write read 8 blocks ...passed 00:16:05.333 Test: blockdev write read size > 128k ...passed 00:16:05.333 Test: blockdev write read invalid size ...passed 00:16:05.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.333 Test: blockdev write read max offset ...passed 00:16:05.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.333 Test: blockdev writev readv 8 blocks ...passed 00:16:05.333 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.333 Test: blockdev writev readv block ...passed 00:16:05.333 Test: blockdev writev readv size > 128k ...passed 00:16:05.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.333 Test: blockdev comparev and writev ...passed 00:16:05.333 Test: blockdev nvme passthru rw ...passed 00:16:05.333 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.333 Test: blockdev nvme admin passthru ...passed 00:16:05.333 Test: blockdev copy ...passed 00:16:05.333 Suite: bdevio tests on: nvme2n1 00:16:05.333 Test: blockdev write read block ...passed 00:16:05.333 Test: blockdev write zeroes read block ...passed 00:16:05.333 Test: blockdev write zeroes read no split ...passed 00:16:05.333 Test: blockdev write zeroes read split ...passed 00:16:05.333 Test: blockdev write zeroes read split partial ...passed 00:16:05.333 Test: blockdev reset ...passed 
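The MiB figures in the I/O targets list above follow directly from blocks times block size; a quick shell check for two of the targets:

    # blocks * block_size, reported in MiB by the harness:
    echo $(( 1310720 * 4096 / 1024 / 1024 ))   # -> 5120, matching nvme3n1 (5120 MiB)
    echo $((  262144 * 4096 / 1024 / 1024 ))   # -> 1024, matching nvme1n1 (1024 MiB)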
00:16:05.333 Test: blockdev write read 8 blocks ...passed 00:16:05.333 Test: blockdev write read size > 128k ...passed 00:16:05.333 Test: blockdev write read invalid size ...passed 00:16:05.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.333 Test: blockdev write read max offset ...passed 00:16:05.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.333 Test: blockdev writev readv 8 blocks ...passed 00:16:05.333 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.333 Test: blockdev writev readv block ...passed 00:16:05.333 Test: blockdev writev readv size > 128k ...passed 00:16:05.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.333 Test: blockdev comparev and writev ...passed 00:16:05.333 Test: blockdev nvme passthru rw ...passed 00:16:05.333 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.333 Test: blockdev nvme admin passthru ...passed 00:16:05.333 Test: blockdev copy ...passed 00:16:05.333 Suite: bdevio tests on: nvme1n1 00:16:05.333 Test: blockdev write read block ...passed 00:16:05.333 Test: blockdev write zeroes read block ...passed 00:16:05.333 Test: blockdev write zeroes read no split ...passed 00:16:05.333 Test: blockdev write zeroes read split ...passed 00:16:05.333 Test: blockdev write zeroes read split partial ...passed 00:16:05.333 Test: blockdev reset ...passed 00:16:05.333 Test: blockdev write read 8 blocks ...passed 00:16:05.333 Test: blockdev write read size > 128k ...passed 00:16:05.333 Test: blockdev write read invalid size ...passed 00:16:05.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.333 Test: blockdev write read max offset ...passed 00:16:05.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.333 Test: blockdev writev readv 8 blocks ...passed 00:16:05.333 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.333 Test: blockdev writev readv block ...passed 00:16:05.333 Test: blockdev writev readv size > 128k ...passed 00:16:05.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.333 Test: blockdev comparev and writev ...passed 00:16:05.333 Test: blockdev nvme passthru rw ...passed 00:16:05.333 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.333 Test: blockdev nvme admin passthru ...passed 00:16:05.333 Test: blockdev copy ...passed 00:16:05.333 Suite: bdevio tests on: nvme0n3 00:16:05.333 Test: blockdev write read block ...passed 00:16:05.333 Test: blockdev write zeroes read block ...passed 00:16:05.333 Test: blockdev write zeroes read no split ...passed 00:16:05.333 Test: blockdev write zeroes read split ...passed 00:16:05.333 Test: blockdev write zeroes read split partial ...passed 00:16:05.333 Test: blockdev reset ...passed 00:16:05.333 Test: blockdev write read 8 blocks ...passed 00:16:05.333 Test: blockdev write read size > 128k ...passed 00:16:05.333 Test: blockdev write read invalid size ...passed 00:16:05.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.333 Test: blockdev write read max offset ...passed 00:16:05.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.333 Test: blockdev writev readv 8 blocks 
...passed 00:16:05.333 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.333 Test: blockdev writev readv block ...passed 00:16:05.333 Test: blockdev writev readv size > 128k ...passed 00:16:05.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.333 Test: blockdev comparev and writev ...passed 00:16:05.333 Test: blockdev nvme passthru rw ...passed 00:16:05.333 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.333 Test: blockdev nvme admin passthru ...passed 00:16:05.333 Test: blockdev copy ...passed 00:16:05.333 Suite: bdevio tests on: nvme0n2 00:16:05.333 Test: blockdev write read block ...passed 00:16:05.333 Test: blockdev write zeroes read block ...passed 00:16:05.333 Test: blockdev write zeroes read no split ...passed 00:16:05.592 Test: blockdev write zeroes read split ...passed 00:16:05.592 Test: blockdev write zeroes read split partial ...passed 00:16:05.592 Test: blockdev reset ...passed 00:16:05.592 Test: blockdev write read 8 blocks ...passed 00:16:05.592 Test: blockdev write read size > 128k ...passed 00:16:05.592 Test: blockdev write read invalid size ...passed 00:16:05.592 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.592 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.592 Test: blockdev write read max offset ...passed 00:16:05.592 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.592 Test: blockdev writev readv 8 blocks ...passed 00:16:05.592 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.592 Test: blockdev writev readv block ...passed 00:16:05.592 Test: blockdev writev readv size > 128k ...passed 00:16:05.592 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.592 Test: blockdev comparev and writev ...passed 00:16:05.592 Test: blockdev nvme passthru rw ...passed 00:16:05.592 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.592 Test: blockdev nvme admin passthru ...passed 00:16:05.592 Test: blockdev copy ...passed 00:16:05.592 Suite: bdevio tests on: nvme0n1 00:16:05.592 Test: blockdev write read block ...passed 00:16:05.592 Test: blockdev write zeroes read block ...passed 00:16:05.592 Test: blockdev write zeroes read no split ...passed 00:16:05.592 Test: blockdev write zeroes read split ...passed 00:16:05.592 Test: blockdev write zeroes read split partial ...passed 00:16:05.592 Test: blockdev reset ...passed 00:16:05.592 Test: blockdev write read 8 blocks ...passed 00:16:05.592 Test: blockdev write read size > 128k ...passed 00:16:05.592 Test: blockdev write read invalid size ...passed 00:16:05.592 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.592 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.592 Test: blockdev write read max offset ...passed 00:16:05.592 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.592 Test: blockdev writev readv 8 blocks ...passed 00:16:05.592 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.592 Test: blockdev writev readv block ...passed 00:16:05.592 Test: blockdev writev readv size > 128k ...passed 00:16:05.592 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.592 Test: blockdev comparev and writev ...passed 00:16:05.592 Test: blockdev nvme passthru rw ...passed 00:16:05.592 Test: blockdev nvme passthru vendor specific ...passed 00:16:05.592 Test: blockdev nvme admin passthru ...passed 00:16:05.592 Test: blockdev copy ...passed 
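Every suite above runs the same fixed matrix of 23 per-bdev tests, which is where the totals in the Run Summary that follows come from:

    # 6 bdevio suites (one per bdev) x 23 tests each = 138, as reported below.
    echo $(( 6 * 23 ))   # -> 138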
00:16:05.592 00:16:05.592 Run Summary: Type Total Ran Passed Failed Inactive 00:16:05.592 suites 6 6 n/a 0 0 00:16:05.592 tests 138 138 138 0 0 00:16:05.592 asserts 780 780 780 0 n/a 00:16:05.592 00:16:05.592 Elapsed time = 0.904 seconds 00:16:05.592 0 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72363 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72363 ']' 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72363 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72363 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.592 killing process with pid 72363 00:16:05.592 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.593 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72363' 00:16:05.593 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72363 00:16:05.593 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72363 00:16:06.160 19:14:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:06.160 00:16:06.160 real 0m2.020s 00:16:06.160 user 0m5.138s 00:16:06.160 sys 0m0.290s 00:16:06.160 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.160 19:14:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:06.160 ************************************ 00:16:06.160 END TEST bdev_bounds 00:16:06.160 ************************************ 00:16:06.160 19:14:15 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:06.160 19:14:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:06.160 19:14:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.160 19:14:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.160 ************************************ 00:16:06.160 START TEST bdev_nbd 00:16:06.160 ************************************ 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
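The bdev_nbd test that begins here exports each of the six bdevs as a kernel /dev/nbd* device over a private RPC socket and verifies each one with a single-block direct-I/O dd, as the traces below show. The core of that flow, condensed (the suite first lets nbd_start_disk auto-assign devices and later repeats with an explicit device list; the waitfornbd and stat-based size checks are elided here):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    i=0
    for bdev in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        $RPC nbd_start_disk "$bdev" "/dev/nbd$i"   # attach the bdev to an nbd device
        dd if="/dev/nbd$i" of="$SPDK_DIR/test/bdev/nbdtest" \
            bs=4096 count=1 iflag=direct           # 1-block read check, as in the traces
        rm -f "$SPDK_DIR/test/bdev/nbdtest"
        $RPC nbd_stop_disk "/dev/nbd$i"
        i=$(( i + 1 ))
    done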
00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72419 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72419 /var/tmp/spdk-nbd.sock 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72419 ']' 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:06.160 19:14:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:06.160 [2024-11-27 19:14:15.791629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
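Note that every rpc.py call in this test passes -s /var/tmp/spdk-nbd.sock: the nbd checks run against their own bdev_svc instance on a private socket rather than the default /var/tmp/spdk.sock used earlier. The launch, as logged:

    # Minimal SPDK app hosting just the bdev layer, on a dedicated RPC socket.
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" \
        -r /var/tmp/spdk-nbd.sock \              # RPC listen address for this test
        -i 0 \                                   # shared-memory instance id
        --json "$SPDK_DIR/test/bdev/bdev.json"   # same six xNVMe bdev definitions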
00:16:06.160 [2024-11-27 19:14:15.791735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.418 [2024-11-27 19:14:15.941845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.418 [2024-11-27 19:14:16.028986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:07.008 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.316 
1+0 records in 00:16:07.316 1+0 records out 00:16:07.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472541 s, 8.7 MB/s 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:07.316 19:14:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.578 1+0 records in 00:16:07.578 1+0 records out 00:16:07.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496487 s, 8.2 MB/s 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:07.578 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:07.840 19:14:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.840 1+0 records in 00:16:07.840 1+0 records out 00:16:07.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389996 s, 10.5 MB/s 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:07.840 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.100 1+0 records in 00:16:08.100 1+0 records out 00:16:08.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503922 s, 8.1 MB/s 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.100 1+0 records in 00:16:08.100 1+0 records out 00:16:08.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556343 s, 7.4 MB/s 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:08.100 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:08.361 19:14:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.361 1+0 records in 00:16:08.361 1+0 records out 00:16:08.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616122 s, 6.6 MB/s 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:08.361 19:14:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd0", 00:16:08.622 "bdev_name": "nvme0n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd1", 00:16:08.622 "bdev_name": "nvme0n2" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd2", 00:16:08.622 "bdev_name": "nvme0n3" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd3", 00:16:08.622 "bdev_name": "nvme1n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd4", 00:16:08.622 "bdev_name": "nvme2n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd5", 00:16:08.622 "bdev_name": "nvme3n1" 00:16:08.622 } 00:16:08.622 ]' 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd0", 00:16:08.622 "bdev_name": "nvme0n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd1", 00:16:08.622 "bdev_name": "nvme0n2" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd2", 00:16:08.622 "bdev_name": "nvme0n3" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd3", 00:16:08.622 "bdev_name": "nvme1n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd4", 00:16:08.622 "bdev_name": "nvme2n1" 00:16:08.622 }, 00:16:08.622 { 00:16:08.622 "nbd_device": "/dev/nbd5", 00:16:08.622 "bdev_name": "nvme3n1" 00:16:08.622 } 00:16:08.622 ]' 00:16:08.622 19:14:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.622 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.883 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.144 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.145 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.145 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.406 19:14:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:09.406 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.406 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.406 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.406 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:09.667 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.928 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:10.188 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:10.449 /dev/nbd0 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.449 1+0 records in 00:16:10.449 1+0 records out 00:16:10.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362252 s, 11.3 MB/s 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:10.449 19:14:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:10.449 /dev/nbd1 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.710 1+0 records in 00:16:10.710 1+0 records out 00:16:10.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324263 s, 12.6 MB/s 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:10.710 19:14:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:10.710 /dev/nbd10 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.710 1+0 records in 00:16:10.710 1+0 records out 00:16:10.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417092 s, 9.8 MB/s 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:10.710 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:10.972 /dev/nbd11 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.972 19:14:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.972 1+0 records in 00:16:10.972 1+0 records out 00:16:10.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477968 s, 8.6 MB/s 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:10.972 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:11.233 /dev/nbd12 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.233 1+0 records in 00:16:11.233 1+0 records out 00:16:11.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105621 s, 3.9 MB/s 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:11.233 19:14:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:11.494 /dev/nbd13 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:11.494 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.495 1+0 records in 00:16:11.495 1+0 records out 00:16:11.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119459 s, 3.4 MB/s 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:11.495 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd0", 00:16:11.756 "bdev_name": "nvme0n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd1", 00:16:11.756 "bdev_name": "nvme0n2" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd10", 00:16:11.756 "bdev_name": "nvme0n3" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd11", 00:16:11.756 "bdev_name": "nvme1n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd12", 00:16:11.756 "bdev_name": "nvme2n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd13", 00:16:11.756 "bdev_name": "nvme3n1" 00:16:11.756 } 00:16:11.756 ]' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd0", 00:16:11.756 "bdev_name": "nvme0n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd1", 00:16:11.756 "bdev_name": "nvme0n2" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd10", 00:16:11.756 "bdev_name": "nvme0n3" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd11", 00:16:11.756 "bdev_name": "nvme1n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd12", 00:16:11.756 "bdev_name": "nvme2n1" 00:16:11.756 }, 00:16:11.756 { 00:16:11.756 "nbd_device": "/dev/nbd13", 00:16:11.756 "bdev_name": "nvme3n1" 00:16:11.756 } 00:16:11.756 ]' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:11.756 /dev/nbd1 00:16:11.756 /dev/nbd10 00:16:11.756 /dev/nbd11 00:16:11.756 /dev/nbd12 00:16:11.756 /dev/nbd13' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:11.756 /dev/nbd1 00:16:11.756 /dev/nbd10 00:16:11.756 /dev/nbd11 00:16:11.756 /dev/nbd12 00:16:11.756 /dev/nbd13' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:11.756 256+0 records in 00:16:11.756 256+0 records out 00:16:11.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111156 s, 94.3 MB/s 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:11.756 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:12.017 256+0 records in 00:16:12.017 256+0 records out 00:16:12.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242436 s, 4.3 MB/s 00:16:12.017 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:12.017 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:12.279 256+0 records in 00:16:12.279 256+0 records out 00:16:12.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244745 s, 
4.3 MB/s 00:16:12.279 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:12.279 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:12.540 256+0 records in 00:16:12.540 256+0 records out 00:16:12.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13357 s, 7.9 MB/s 00:16:12.540 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:12.540 19:14:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:12.540 256+0 records in 00:16:12.540 256+0 records out 00:16:12.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0838362 s, 12.5 MB/s 00:16:12.540 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:12.540 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:12.540 256+0 records in 00:16:12.540 256+0 records out 00:16:12.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132868 s, 7.9 MB/s 00:16:12.540 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:12.540 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:12.802 256+0 records in 00:16:12.802 256+0 records out 00:16:12.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0841618 s, 12.5 MB/s 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
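
A minimal bash sketch of the data-verify cycle being traced here, with paths, sizes, and the 1M compare window taken from the trace itself (the standalone helper name is illustrative, not the literal nbd_common.sh source): seed a 1 MiB random file, copy it onto every NBD device with O_DIRECT, then byte-compare the same window back from each device.

  # Sketch only: nbd_common.sh implements this as two passes of
  # nbd_dd_data_verify (write, then verify); the function name here
  # is illustrative.
  nbd_data_verify_sketch() {
      local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
      local nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
      # write phase: 256 x 4096 B = 1 MiB of random data per device
      dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
      for dev in "${nbd_list[@]}"; do
          dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
      done
      # verify phase: compare the first 1 MiB of each device byte-for-byte
      for dev in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp_file" "$dev" || return 1
      done
      rm "$tmp_file"
  }
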
00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.802 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.063 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.325 19:14:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.586 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.847 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
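
Each nbd_stop_disk call above is followed by the same waitfornbd_exit pattern: poll /proc/partitions until the kernel has actually released the device. A bash sketch of that loop, assuming a short sleep between attempts (the trace only shows the counter, the grep, and the break):

  # Sketch of the waitfornbd_exit polling loop from nbd_common.sh;
  # the sleep interval is an assumption, the bounds match the trace.
  waitfornbd_exit_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # -w matches the whole device name, -q keeps the loop quiet
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1   # assumed back-off; device still listed, try again
      done
      return 1        # still present after 20 attempts
  }
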
00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:14.105 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:14.363 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:14.363 malloc_lvol_verify 00:16:14.621 19:14:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:14.621 42db2cf1-0878-498e-902b-215b94e84f4e 00:16:14.621 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:14.878 7eda2ada-d8c4-43b0-a97f-fcbe2dfffb26 00:16:14.878 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:15.136 /dev/nbd0 00:16:15.136 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:15.136 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:15.136 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:15.136 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:15.137 mke2fs 1.47.0 (5-Feb-2023) 00:16:15.137 Discarding device 
blocks: 0/4096 done 00:16:15.137 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:15.137 00:16:15.137 Allocating group tables: 0/1 done 00:16:15.137 Writing inode tables: 0/1 done 00:16:15.137 Creating journal (1024 blocks): done 00:16:15.137 Writing superblocks and filesystem accounting information: 0/1 done 00:16:15.137 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.137 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72419 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72419 ']' 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72419 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72419 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.395 killing process with pid 72419 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72419' 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72419 00:16:15.395 19:14:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72419 00:16:15.963 19:14:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:15.964 00:16:15.964 real 0m9.761s 00:16:15.964 user 0m13.587s 00:16:15.964 sys 0m3.257s 00:16:15.964 19:14:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.964 19:14:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:15.964 ************************************ 00:16:15.964 END TEST bdev_nbd 00:16:15.964 
************************************ 00:16:15.964 19:14:25 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:15.964 19:14:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:15.964 19:14:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:15.964 19:14:25 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:15.964 19:14:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:15.964 19:14:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.964 19:14:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.964 ************************************ 00:16:15.964 START TEST bdev_fio 00:16:15.964 ************************************ 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:15.964 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:15.964 19:14:25 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:15.964 ************************************ 00:16:15.964 START TEST bdev_fio_rw_verify 00:16:15.964 ************************************ 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:15.964 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:16.225 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:16.225 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:16.225 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:16.225 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:16.225 19:14:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:16.225 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:16.225 fio-3.35 00:16:16.225 Starting 6 threads 00:16:28.458 00:16:28.459 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72823: Wed Nov 27 19:14:36 2024 00:16:28.459 read: IOPS=27.0k, BW=106MiB/s (111MB/s)(1056MiB/10001msec) 00:16:28.459 slat (usec): min=2, max=1810, avg= 5.42, stdev=10.29 00:16:28.459 clat (usec): min=68, max=4554, avg=663.51, 
stdev=467.92 00:16:28.459 lat (usec): min=73, max=4575, avg=668.93, stdev=468.58 00:16:28.459 clat percentiles (usec): 00:16:28.459 | 50.000th=[ 553], 99.000th=[ 2245], 99.900th=[ 3294], 99.990th=[ 4178], 00:16:28.459 | 99.999th=[ 4424] 00:16:28.459 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(1067MiB/10001msec); 0 zone resets 00:16:28.459 slat (usec): min=13, max=4230, avg=30.28, stdev=85.61 00:16:28.459 clat (usec): min=56, max=6570, avg=842.04, stdev=523.22 00:16:28.459 lat (usec): min=83, max=6601, avg=872.32, stdev=532.07 00:16:28.459 clat percentiles (usec): 00:16:28.459 | 50.000th=[ 725], 99.000th=[ 2573], 99.900th=[ 3621], 99.990th=[ 4883], 00:16:28.459 | 99.999th=[ 6521] 00:16:28.459 bw ( KiB/s): min=77152, max=146799, per=100.00%, avg=109702.00, stdev=3045.81, samples=114 00:16:28.459 iops : min=19288, max=36698, avg=27424.74, stdev=761.47, samples=114 00:16:28.459 lat (usec) : 100=0.07%, 250=11.80%, 500=24.50%, 750=22.88%, 1000=16.25% 00:16:28.459 lat (msec) : 2=21.83%, 4=2.64%, 10=0.03% 00:16:28.459 cpu : usr=40.72%, sys=37.41%, ctx=7522, majf=0, minf=23313 00:16:28.459 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.459 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.459 issued rwts: total=270336,273177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.459 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:28.459 00:16:28.459 Run status group 0 (all jobs): 00:16:28.459 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1056MiB (1107MB), run=10001-10001msec 00:16:28.459 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1067MiB (1119MB), run=10001-10001msec 00:16:28.459 ----------------------------------------------------- 00:16:28.459 Suppressions used: 00:16:28.459 count bytes template 00:16:28.459 6 48 /usr/src/fio/parse.c 00:16:28.459 2665 255840 /usr/src/fio/iolog.c 00:16:28.459 1 8 libtcmalloc_minimal.so 00:16:28.459 1 904 libcrypto.so 00:16:28.459 ----------------------------------------------------- 00:16:28.459 00:16:28.459 00:16:28.459 real 0m11.838s 00:16:28.459 user 0m25.854s 00:16:28.459 sys 0m22.740s 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.459 ************************************ 00:16:28.459 END TEST bdev_fio_rw_verify 00:16:28.459 ************************************ 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8950858a-f74a-4f88-b5b3-8f0c8699e097"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8950858a-f74a-4f88-b5b3-8f0c8699e097",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e269c4de-3689-428c-be7c-405d5c1fdc9e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e269c4de-3689-428c-be7c-405d5c1fdc9e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "72a543c1-9a50-4306-985b-0b09d1340429"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72a543c1-9a50-4306-985b-0b09d1340429",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9f47d458-7877-46f7-9027-f30147e7b1e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9f47d458-7877-46f7-9027-f30147e7b1e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cd3fd121-bd09-4c99-9986-49db85c58f1e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cd3fd121-bd09-4c99-9986-49db85c58f1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "36083079-95bc-415c-864d-b5e6292a99eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "36083079-95bc-415c-864d-b5e6292a99eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:28.459 /home/vagrant/spdk_repo/spdk 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
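
The long JSON dump above is what feeds blockdev.sh's trim-capability check: each bdev description is printed as a separate top-level JSON object and filtered through jq. A sketch of that filter, assuming bdevs_json holds the objects printed above; since all six xNVMe bdevs report "unmap": false, the result is empty and no trim job section is generated.

  # Sketch: keep only names of bdevs that advertise unmap (trim) support.
  # jq applies the filter to each top-level object in the stream.
  printf '%s\n' "$bdevs_json" \
      | jq -r 'select(.supported_io_types.unmap == true) | .name'
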
00:16:28.459 00:16:28.459 real 0m11.982s 00:16:28.459 user 0m25.928s 00:16:28.459 sys 0m22.811s 00:16:28.459 ************************************ 00:16:28.459 END TEST bdev_fio 00:16:28.459 ************************************ 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.459 19:14:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 19:14:37 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:28.459 19:14:37 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:28.459 19:14:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:28.459 19:14:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.459 19:14:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 ************************************ 00:16:28.459 START TEST bdev_verify 00:16:28.459 ************************************ 00:16:28.460 19:14:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:28.460 [2024-11-27 19:14:37.610707] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:28.460 [2024-11-27 19:14:37.610814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72997 ] 00:16:28.460 [2024-11-27 19:14:37.770291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.460 [2024-11-27 19:14:37.878573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.460 [2024-11-27 19:14:37.878724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.720 Running I/O for 5 seconds... 
00:16:31.049 22112.00 IOPS, 86.38 MiB/s [2024-11-27T19:14:41.630Z] 22736.00 IOPS, 88.81 MiB/s [2024-11-27T19:14:42.620Z] 23040.00 IOPS, 90.00 MiB/s [2024-11-27T19:14:43.564Z] 22896.00 IOPS, 89.44 MiB/s [2024-11-27T19:14:43.564Z] 22457.60 IOPS, 87.72 MiB/s 00:16:33.929 Latency(us) 00:16:33.929 [2024-11-27T19:14:43.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.929 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0x80000 00:16:33.929 nvme0n1 : 5.02 1760.19 6.88 0.00 0.00 72584.57 5671.38 63721.16 00:16:33.929 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x80000 length 0x80000 00:16:33.929 nvme0n1 : 5.03 1602.45 6.26 0.00 0.00 79671.19 7309.78 84289.38 00:16:33.929 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0x80000 00:16:33.929 nvme0n2 : 5.04 1753.72 6.85 0.00 0.00 72701.06 13107.20 59688.17 00:16:33.929 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x80000 length 0x80000 00:16:33.929 nvme0n2 : 5.05 1596.31 6.24 0.00 0.00 79719.16 12401.43 82272.89 00:16:33.929 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0x80000 00:16:33.929 nvme0n3 : 5.07 1767.15 6.90 0.00 0.00 72012.76 8771.74 59688.17 00:16:33.929 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x80000 length 0x80000 00:16:33.929 nvme0n3 : 5.06 1593.31 6.22 0.00 0.00 79587.85 11897.30 77030.01 00:16:33.929 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0x20000 00:16:33.929 nvme1n1 : 5.07 1766.62 6.90 0.00 0.00 71895.82 7461.02 64527.75 00:16:33.929 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x20000 length 0x20000 00:16:33.929 nvme1n1 : 5.09 1609.90 6.29 0.00 0.00 78547.13 6503.19 78239.90 00:16:33.929 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0xbd0bd 00:16:33.929 nvme2n1 : 5.07 2888.09 11.28 0.00 0.00 43838.49 4713.55 64124.46 00:16:33.929 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:33.929 nvme2n1 : 5.09 2635.77 10.30 0.00 0.00 47877.95 4587.52 68157.44 00:16:33.929 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0x0 length 0xa0000 00:16:33.929 nvme3n1 : 5.09 1759.75 6.87 0.00 0.00 71934.13 4915.20 83482.78 00:16:33.929 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:33.929 Verification LBA range: start 0xa0000 length 0xa0000 00:16:33.929 nvme3n1 : 5.09 1433.42 5.60 0.00 0.00 87975.11 4385.87 110503.78 00:16:33.929 [2024-11-27T19:14:43.564Z] =================================================================================================================== 00:16:33.929 [2024-11-27T19:14:43.564Z] Total : 22166.68 86.59 0.00 0.00 68698.37 4385.87 110503.78 00:16:34.868 00:16:34.868 real 0m6.647s 00:16:34.868 user 0m10.716s 00:16:34.868 sys 0m1.548s 00:16:34.868 19:14:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.868 19:14:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:34.868 ************************************ 00:16:34.868 END TEST bdev_verify 00:16:34.868 ************************************ 00:16:34.868 19:14:44 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:34.868 19:14:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:34.868 19:14:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.868 19:14:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.868 ************************************ 00:16:34.868 START TEST bdev_verify_big_io 00:16:34.868 ************************************ 00:16:34.868 19:14:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:34.869 [2024-11-27 19:14:44.302443] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:34.869 [2024-11-27 19:14:44.302552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73091 ] 00:16:34.869 [2024-11-27 19:14:44.461824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.129 [2024-11-27 19:14:44.567277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.129 [2024-11-27 19:14:44.567388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.701 Running I/O for 5 seconds... 
00:16:41.830 2004.00 IOPS, 125.25 MiB/s [2024-11-27T19:14:52.037Z] 3054.00 IOPS, 190.88 MiB/s [2024-11-27T19:14:52.037Z] 3453.33 IOPS, 215.83 MiB/s 00:16:42.402 Latency(us) 00:16:42.402 [2024-11-27T19:14:52.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.402 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0x8000 00:16:42.402 nvme0n1 : 5.91 124.57 7.79 0.00 0.00 996591.05 150027.03 1038896.84 00:16:42.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x8000 length 0x8000 00:16:42.402 nvme0n1 : 6.06 84.50 5.28 0.00 0.00 1453415.98 94775.14 2013265.92 00:16:42.402 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0x8000 00:16:42.402 nvme0n2 : 5.76 130.56 8.16 0.00 0.00 930195.03 171805.14 909841.33 00:16:42.402 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x8000 length 0x8000 00:16:42.402 nvme0n2 : 6.01 93.25 5.83 0.00 0.00 1214340.62 3780.92 1593835.52 00:16:42.402 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0x8000 00:16:42.402 nvme0n3 : 5.93 144.37 9.02 0.00 0.00 814771.02 17543.48 777559.43 00:16:42.402 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x8000 length 0x8000 00:16:42.402 nvme0n3 : 6.04 60.91 3.81 0.00 0.00 1751766.87 41943.04 3407065.40 00:16:42.402 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0x2000 00:16:42.402 nvme1n1 : 5.92 145.85 9.12 0.00 0.00 781655.67 9981.64 1303460.63 00:16:42.402 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x2000 length 0x2000 00:16:42.402 nvme1n1 : 6.25 120.24 7.51 0.00 0.00 843397.10 19862.45 2697260.11 00:16:42.402 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0xbd0b 00:16:42.402 nvme2n1 : 5.93 156.43 9.78 0.00 0.00 716272.52 6553.60 1703532.70 00:16:42.402 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:42.402 nvme2n1 : 6.51 174.56 10.91 0.00 0.00 552239.69 6326.74 2335904.69 00:16:42.402 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0x0 length 0xa000 00:16:42.402 nvme3n1 : 5.93 151.16 9.45 0.00 0.00 717670.34 4688.34 845313.58 00:16:42.402 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:42.402 Verification LBA range: start 0xa000 length 0xa000 00:16:42.402 nvme3n1 : 6.78 295.09 18.44 0.00 0.00 311690.19 715.22 2464960.20 00:16:42.402 [2024-11-27T19:14:52.037Z] =================================================================================================================== 00:16:42.402 [2024-11-27T19:14:52.037Z] Total : 1681.49 105.09 0.00 0.00 778698.35 715.22 3407065.40 00:16:43.337 00:16:43.337 real 0m8.402s 00:16:43.337 user 0m15.639s 00:16:43.337 sys 0m0.396s 00:16:43.337 19:14:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.337 19:14:52 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.337 ************************************ 00:16:43.337 END TEST bdev_verify_big_io 00:16:43.337 ************************************ 00:16:43.337 19:14:52 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:43.337 19:14:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:43.337 19:14:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.337 19:14:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.337 ************************************ 00:16:43.338 START TEST bdev_write_zeroes 00:16:43.338 ************************************ 00:16:43.338 19:14:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:43.338 [2024-11-27 19:14:52.746603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:43.338 [2024-11-27 19:14:52.746719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:16:43.338 [2024-11-27 19:14:52.900706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.596 [2024-11-27 19:14:52.988292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.854 Running I/O for 1 seconds... 
00:16:44.793 76672.00 IOPS, 299.50 MiB/s 00:16:44.793 Latency(us) 00:16:44.793 [2024-11-27T19:14:54.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.793 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme0n1 : 1.01 11883.40 46.42 0.00 0.00 10761.37 5671.38 24399.56 00:16:44.793 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme0n2 : 1.01 11742.69 45.87 0.00 0.00 10882.52 4839.58 24097.08 00:16:44.793 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme0n3 : 1.01 11855.07 46.31 0.00 0.00 10772.66 4184.22 23895.43 00:16:44.793 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme1n1 : 1.02 11824.47 46.19 0.00 0.00 10794.25 4940.41 23592.96 00:16:44.793 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme2n1 : 1.02 17382.73 67.90 0.00 0.00 7336.33 3654.89 18450.90 00:16:44.793 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:44.793 nvme3n1 : 1.02 11810.71 46.14 0.00 0.00 10749.73 5671.38 21778.12 00:16:44.793 [2024-11-27T19:14:54.428Z] =================================================================================================================== 00:16:44.793 [2024-11-27T19:14:54.428Z] Total : 76499.08 298.82 0.00 0.00 10006.30 3654.89 24399.56 00:16:45.736 00:16:45.736 real 0m2.415s 00:16:45.736 user 0m1.717s 00:16:45.736 sys 0m0.533s 00:16:45.736 19:14:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.736 19:14:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 ************************************ 00:16:45.736 END TEST bdev_write_zeroes 00:16:45.736 ************************************ 00:16:45.736 19:14:55 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:45.736 19:14:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:45.736 19:14:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.736 19:14:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 ************************************ 00:16:45.736 START TEST bdev_json_nonenclosed 00:16:45.736 ************************************ 00:16:45.736 19:14:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:45.736 [2024-11-27 19:14:55.215360] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:45.736 [2024-11-27 19:14:55.215487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73260 ] 00:16:45.995 [2024-11-27 19:14:55.376163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.995 [2024-11-27 19:14:55.481978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.995 [2024-11-27 19:14:55.482062] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:45.995 [2024-11-27 19:14:55.482081] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:45.995 [2024-11-27 19:14:55.482090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:46.254 00:16:46.254 real 0m0.513s 00:16:46.254 user 0m0.303s 00:16:46.254 sys 0m0.107s 00:16:46.254 19:14:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.254 19:14:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:46.254 ************************************ 00:16:46.254 END TEST bdev_json_nonenclosed 00:16:46.254 ************************************ 00:16:46.254 19:14:55 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.254 19:14:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:46.254 19:14:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.254 19:14:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:46.254 ************************************ 00:16:46.254 START TEST bdev_json_nonarray 00:16:46.254 ************************************ 00:16:46.254 19:14:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.254 [2024-11-27 19:14:55.775709] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:46.254 [2024-11-27 19:14:55.775818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:16:46.514 [2024-11-27 19:14:55.933764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.514 [2024-11-27 19:14:56.038550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.514 [2024-11-27 19:14:56.038647] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:46.514 [2024-11-27 19:14:56.038665] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:46.514 [2024-11-27 19:14:56.038675] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:46.773 00:16:46.773 real 0m0.510s 00:16:46.773 user 0m0.304s 00:16:46.773 sys 0m0.102s 00:16:46.773 19:14:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.773 ************************************ 00:16:46.773 END TEST bdev_json_nonarray 00:16:46.773 ************************************ 00:16:46.773 19:14:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:46.773 19:14:56 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:47.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.240 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.240 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:03.628 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:03.628 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:03.890 00:17:03.890 real 1m5.500s 00:17:03.890 user 1m19.872s 00:17:03.890 sys 0m50.689s 00:17:03.890 19:15:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.890 19:15:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.890 ************************************ 00:17:03.890 END TEST blockdev_xnvme 00:17:03.890 ************************************ 00:17:03.890 19:15:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:03.890 19:15:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.891 19:15:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.891 19:15:13 -- common/autotest_common.sh@10 -- # set +x 00:17:03.891 ************************************ 00:17:03.891 START TEST ublk 00:17:03.891 ************************************ 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:03.891 * Looking for test storage... 
00:17:03.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.891 19:15:13 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.891 19:15:13 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.891 19:15:13 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.891 19:15:13 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.891 19:15:13 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.891 19:15:13 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:03.891 19:15:13 ublk -- scripts/common.sh@345 -- # : 1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.891 19:15:13 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.891 19:15:13 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@353 -- # local d=1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.891 19:15:13 ublk -- scripts/common.sh@355 -- # echo 1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.891 19:15:13 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@353 -- # local d=2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.891 19:15:13 ublk -- scripts/common.sh@355 -- # echo 2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.891 19:15:13 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.891 19:15:13 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.891 19:15:13 ublk -- scripts/common.sh@368 -- # return 0 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.891 --rc genhtml_branch_coverage=1 00:17:03.891 --rc genhtml_function_coverage=1 00:17:03.891 --rc genhtml_legend=1 00:17:03.891 --rc geninfo_all_blocks=1 00:17:03.891 --rc geninfo_unexecuted_blocks=1 00:17:03.891 00:17:03.891 ' 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.891 --rc genhtml_branch_coverage=1 00:17:03.891 --rc genhtml_function_coverage=1 00:17:03.891 --rc genhtml_legend=1 00:17:03.891 --rc geninfo_all_blocks=1 00:17:03.891 --rc geninfo_unexecuted_blocks=1 00:17:03.891 00:17:03.891 ' 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.891 --rc genhtml_branch_coverage=1 00:17:03.891 --rc 
genhtml_function_coverage=1 00:17:03.891 --rc genhtml_legend=1 00:17:03.891 --rc geninfo_all_blocks=1 00:17:03.891 --rc geninfo_unexecuted_blocks=1 00:17:03.891 00:17:03.891 ' 00:17:03.891 19:15:13 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.891 --rc genhtml_branch_coverage=1 00:17:03.891 --rc genhtml_function_coverage=1 00:17:03.891 --rc genhtml_legend=1 00:17:03.891 --rc geninfo_all_blocks=1 00:17:03.891 --rc geninfo_unexecuted_blocks=1 00:17:03.891 00:17:03.891 ' 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:03.891 19:15:13 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:03.891 19:15:13 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:03.891 19:15:13 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:03.891 19:15:13 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:03.891 19:15:13 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:03.891 19:15:13 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:03.891 19:15:13 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:03.891 19:15:13 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:03.891 19:15:13 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:04.154 19:15:13 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:04.154 19:15:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.154 19:15:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.154 19:15:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.154 ************************************ 00:17:04.154 START TEST test_save_ublk_config 00:17:04.154 ************************************ 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73599 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73599 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73599 ']' 00:17:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:04.154 19:15:13 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:04.154 [2024-11-27 19:15:13.629461] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:04.154 [2024-11-27 19:15:13.629606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73599 ] 00:17:04.415 [2024-11-27 19:15:13.795007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.415 [2024-11-27 19:15:13.941307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:05.359 [2024-11-27 19:15:14.763162] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.359 [2024-11-27 19:15:14.764160] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.359 malloc0 00:17:05.359 [2024-11-27 19:15:14.843303] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:05.359 [2024-11-27 19:15:14.843415] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:05.359 [2024-11-27 19:15:14.843428] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:05.359 [2024-11-27 19:15:14.843437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.359 [2024-11-27 19:15:14.852951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.359 [2024-11-27 19:15:14.852984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.359 [2024-11-27 19:15:14.860151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.359 [2024-11-27 19:15:14.860307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:05.359 [2024-11-27 19:15:14.877168] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:05.359 0 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.359 19:15:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:05.621 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.621 19:15:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:05.621 "subsystems": [ 00:17:05.621 { 00:17:05.621 "subsystem": 
"fsdev", 00:17:05.621 "config": [ 00:17:05.621 { 00:17:05.621 "method": "fsdev_set_opts", 00:17:05.621 "params": { 00:17:05.621 "fsdev_io_pool_size": 65535, 00:17:05.621 "fsdev_io_cache_size": 256 00:17:05.621 } 00:17:05.621 } 00:17:05.621 ] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "keyring", 00:17:05.621 "config": [] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "iobuf", 00:17:05.621 "config": [ 00:17:05.621 { 00:17:05.621 "method": "iobuf_set_options", 00:17:05.621 "params": { 00:17:05.621 "small_pool_count": 8192, 00:17:05.621 "large_pool_count": 1024, 00:17:05.621 "small_bufsize": 8192, 00:17:05.621 "large_bufsize": 135168, 00:17:05.621 "enable_numa": false 00:17:05.621 } 00:17:05.621 } 00:17:05.621 ] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "sock", 00:17:05.621 "config": [ 00:17:05.621 { 00:17:05.621 "method": "sock_set_default_impl", 00:17:05.621 "params": { 00:17:05.621 "impl_name": "posix" 00:17:05.621 } 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "method": "sock_impl_set_options", 00:17:05.621 "params": { 00:17:05.621 "impl_name": "ssl", 00:17:05.621 "recv_buf_size": 4096, 00:17:05.621 "send_buf_size": 4096, 00:17:05.621 "enable_recv_pipe": true, 00:17:05.621 "enable_quickack": false, 00:17:05.621 "enable_placement_id": 0, 00:17:05.621 "enable_zerocopy_send_server": true, 00:17:05.621 "enable_zerocopy_send_client": false, 00:17:05.621 "zerocopy_threshold": 0, 00:17:05.621 "tls_version": 0, 00:17:05.621 "enable_ktls": false 00:17:05.621 } 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "method": "sock_impl_set_options", 00:17:05.621 "params": { 00:17:05.621 "impl_name": "posix", 00:17:05.621 "recv_buf_size": 2097152, 00:17:05.621 "send_buf_size": 2097152, 00:17:05.621 "enable_recv_pipe": true, 00:17:05.621 "enable_quickack": false, 00:17:05.621 "enable_placement_id": 0, 00:17:05.621 "enable_zerocopy_send_server": true, 00:17:05.621 "enable_zerocopy_send_client": false, 00:17:05.621 "zerocopy_threshold": 0, 00:17:05.621 "tls_version": 0, 00:17:05.621 "enable_ktls": false 00:17:05.621 } 00:17:05.621 } 00:17:05.621 ] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "vmd", 00:17:05.621 "config": [] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "accel", 00:17:05.621 "config": [ 00:17:05.621 { 00:17:05.621 "method": "accel_set_options", 00:17:05.621 "params": { 00:17:05.621 "small_cache_size": 128, 00:17:05.621 "large_cache_size": 16, 00:17:05.621 "task_count": 2048, 00:17:05.621 "sequence_count": 2048, 00:17:05.621 "buf_count": 2048 00:17:05.621 } 00:17:05.621 } 00:17:05.621 ] 00:17:05.621 }, 00:17:05.621 { 00:17:05.621 "subsystem": "bdev", 00:17:05.621 "config": [ 00:17:05.621 { 00:17:05.621 "method": "bdev_set_options", 00:17:05.621 "params": { 00:17:05.621 "bdev_io_pool_size": 65535, 00:17:05.621 "bdev_io_cache_size": 256, 00:17:05.621 "bdev_auto_examine": true, 00:17:05.622 "iobuf_small_cache_size": 128, 00:17:05.622 "iobuf_large_cache_size": 16 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_raid_set_options", 00:17:05.622 "params": { 00:17:05.622 "process_window_size_kb": 1024, 00:17:05.622 "process_max_bandwidth_mb_sec": 0 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_iscsi_set_options", 00:17:05.622 "params": { 00:17:05.622 "timeout_sec": 30 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_nvme_set_options", 00:17:05.622 "params": { 00:17:05.622 "action_on_timeout": "none", 00:17:05.622 "timeout_us": 0, 00:17:05.622 "timeout_admin_us": 0, 
00:17:05.622 "keep_alive_timeout_ms": 10000, 00:17:05.622 "arbitration_burst": 0, 00:17:05.622 "low_priority_weight": 0, 00:17:05.622 "medium_priority_weight": 0, 00:17:05.622 "high_priority_weight": 0, 00:17:05.622 "nvme_adminq_poll_period_us": 10000, 00:17:05.622 "nvme_ioq_poll_period_us": 0, 00:17:05.622 "io_queue_requests": 0, 00:17:05.622 "delay_cmd_submit": true, 00:17:05.622 "transport_retry_count": 4, 00:17:05.622 "bdev_retry_count": 3, 00:17:05.622 "transport_ack_timeout": 0, 00:17:05.622 "ctrlr_loss_timeout_sec": 0, 00:17:05.622 "reconnect_delay_sec": 0, 00:17:05.622 "fast_io_fail_timeout_sec": 0, 00:17:05.622 "disable_auto_failback": false, 00:17:05.622 "generate_uuids": false, 00:17:05.622 "transport_tos": 0, 00:17:05.622 "nvme_error_stat": false, 00:17:05.622 "rdma_srq_size": 0, 00:17:05.622 "io_path_stat": false, 00:17:05.622 "allow_accel_sequence": false, 00:17:05.622 "rdma_max_cq_size": 0, 00:17:05.622 "rdma_cm_event_timeout_ms": 0, 00:17:05.622 "dhchap_digests": [ 00:17:05.622 "sha256", 00:17:05.622 "sha384", 00:17:05.622 "sha512" 00:17:05.622 ], 00:17:05.622 "dhchap_dhgroups": [ 00:17:05.622 "null", 00:17:05.622 "ffdhe2048", 00:17:05.622 "ffdhe3072", 00:17:05.622 "ffdhe4096", 00:17:05.622 "ffdhe6144", 00:17:05.622 "ffdhe8192" 00:17:05.622 ] 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_nvme_set_hotplug", 00:17:05.622 "params": { 00:17:05.622 "period_us": 100000, 00:17:05.622 "enable": false 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_malloc_create", 00:17:05.622 "params": { 00:17:05.622 "name": "malloc0", 00:17:05.622 "num_blocks": 8192, 00:17:05.622 "block_size": 4096, 00:17:05.622 "physical_block_size": 4096, 00:17:05.622 "uuid": "8dadb7dc-ff4b-46c0-9276-da71d88215a9", 00:17:05.622 "optimal_io_boundary": 0, 00:17:05.622 "md_size": 0, 00:17:05.622 "dif_type": 0, 00:17:05.622 "dif_is_head_of_md": false, 00:17:05.622 "dif_pi_format": 0 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "bdev_wait_for_examine" 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "scsi", 00:17:05.622 "config": null 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "scheduler", 00:17:05.622 "config": [ 00:17:05.622 { 00:17:05.622 "method": "framework_set_scheduler", 00:17:05.622 "params": { 00:17:05.622 "name": "static" 00:17:05.622 } 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "vhost_scsi", 00:17:05.622 "config": [] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "vhost_blk", 00:17:05.622 "config": [] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "ublk", 00:17:05.622 "config": [ 00:17:05.622 { 00:17:05.622 "method": "ublk_create_target", 00:17:05.622 "params": { 00:17:05.622 "cpumask": "1" 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "ublk_start_disk", 00:17:05.622 "params": { 00:17:05.622 "bdev_name": "malloc0", 00:17:05.622 "ublk_id": 0, 00:17:05.622 "num_queues": 1, 00:17:05.622 "queue_depth": 128 00:17:05.622 } 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "nbd", 00:17:05.622 "config": [] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "nvmf", 00:17:05.622 "config": [ 00:17:05.622 { 00:17:05.622 "method": "nvmf_set_config", 00:17:05.622 "params": { 00:17:05.622 "discovery_filter": "match_any", 00:17:05.622 "admin_cmd_passthru": { 00:17:05.622 "identify_ctrlr": false 00:17:05.622 }, 00:17:05.622 "dhchap_digests": [ 00:17:05.622 "sha256", 
00:17:05.622 "sha384", 00:17:05.622 "sha512" 00:17:05.622 ], 00:17:05.622 "dhchap_dhgroups": [ 00:17:05.622 "null", 00:17:05.622 "ffdhe2048", 00:17:05.622 "ffdhe3072", 00:17:05.622 "ffdhe4096", 00:17:05.622 "ffdhe6144", 00:17:05.622 "ffdhe8192" 00:17:05.622 ] 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "nvmf_set_max_subsystems", 00:17:05.622 "params": { 00:17:05.622 "max_subsystems": 1024 00:17:05.622 } 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "method": "nvmf_set_crdt", 00:17:05.622 "params": { 00:17:05.622 "crdt1": 0, 00:17:05.622 "crdt2": 0, 00:17:05.622 "crdt3": 0 00:17:05.622 } 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 }, 00:17:05.622 { 00:17:05.622 "subsystem": "iscsi", 00:17:05.622 "config": [ 00:17:05.622 { 00:17:05.622 "method": "iscsi_set_options", 00:17:05.622 "params": { 00:17:05.622 "node_base": "iqn.2016-06.io.spdk", 00:17:05.622 "max_sessions": 128, 00:17:05.622 "max_connections_per_session": 2, 00:17:05.622 "max_queue_depth": 64, 00:17:05.622 "default_time2wait": 2, 00:17:05.622 "default_time2retain": 20, 00:17:05.622 "first_burst_length": 8192, 00:17:05.622 "immediate_data": true, 00:17:05.622 "allow_duplicated_isid": false, 00:17:05.622 "error_recovery_level": 0, 00:17:05.622 "nop_timeout": 60, 00:17:05.622 "nop_in_interval": 30, 00:17:05.622 "disable_chap": false, 00:17:05.622 "require_chap": false, 00:17:05.622 "mutual_chap": false, 00:17:05.622 "chap_group": 0, 00:17:05.622 "max_large_datain_per_connection": 64, 00:17:05.622 "max_r2t_per_connection": 4, 00:17:05.622 "pdu_pool_size": 36864, 00:17:05.622 "immediate_data_pool_size": 16384, 00:17:05.622 "data_out_pool_size": 2048 00:17:05.622 } 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 } 00:17:05.622 ] 00:17:05.622 }' 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73599 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73599 ']' 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73599 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73599 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73599' 00:17:05.622 killing process with pid 73599 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73599 00:17:05.622 19:15:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73599 00:17:07.009 [2024-11-27 19:15:16.372897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.009 [2024-11-27 19:15:16.409297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.009 [2024-11-27 19:15:16.409454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.009 [2024-11-27 19:15:16.417177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.009 [2024-11-27 19:15:16.417248] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:17:07.009 [2024-11-27 19:15:16.417265] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:07.009 [2024-11-27 19:15:16.417296] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:07.009 [2024-11-27 19:15:16.417471] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73659 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73659 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73659 ']' 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:08.397 19:15:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:08.397 "subsystems": [ 00:17:08.397 { 00:17:08.397 "subsystem": "fsdev", 00:17:08.397 "config": [ 00:17:08.397 { 00:17:08.397 "method": "fsdev_set_opts", 00:17:08.397 "params": { 00:17:08.397 "fsdev_io_pool_size": 65535, 00:17:08.397 "fsdev_io_cache_size": 256 00:17:08.397 } 00:17:08.397 } 00:17:08.397 ] 00:17:08.397 }, 00:17:08.397 { 00:17:08.397 "subsystem": "keyring", 00:17:08.397 "config": [] 00:17:08.397 }, 00:17:08.397 { 00:17:08.397 "subsystem": "iobuf", 00:17:08.397 "config": [ 00:17:08.397 { 00:17:08.397 "method": "iobuf_set_options", 00:17:08.397 "params": { 00:17:08.397 "small_pool_count": 8192, 00:17:08.397 "large_pool_count": 1024, 00:17:08.397 "small_bufsize": 8192, 00:17:08.397 "large_bufsize": 135168, 00:17:08.397 "enable_numa": false 00:17:08.397 } 00:17:08.397 } 00:17:08.397 ] 00:17:08.397 }, 00:17:08.397 { 00:17:08.397 "subsystem": "sock", 00:17:08.397 "config": [ 00:17:08.397 { 00:17:08.397 "method": "sock_set_default_impl", 00:17:08.397 "params": { 00:17:08.397 "impl_name": "posix" 00:17:08.397 } 00:17:08.397 }, 00:17:08.397 { 00:17:08.397 "method": "sock_impl_set_options", 00:17:08.397 "params": { 00:17:08.397 "impl_name": "ssl", 00:17:08.397 "recv_buf_size": 4096, 00:17:08.397 "send_buf_size": 4096, 00:17:08.397 "enable_recv_pipe": true, 00:17:08.397 "enable_quickack": false, 00:17:08.397 "enable_placement_id": 0, 00:17:08.397 "enable_zerocopy_send_server": true, 00:17:08.397 "enable_zerocopy_send_client": false, 00:17:08.397 "zerocopy_threshold": 0, 00:17:08.397 "tls_version": 0, 00:17:08.397 "enable_ktls": false 00:17:08.397 } 00:17:08.397 }, 00:17:08.397 { 00:17:08.397 "method": "sock_impl_set_options", 00:17:08.397 "params": { 00:17:08.397 "impl_name": "posix", 00:17:08.397 "recv_buf_size": 2097152, 00:17:08.397 "send_buf_size": 2097152, 00:17:08.397 "enable_recv_pipe": true, 00:17:08.397 "enable_quickack": false, 00:17:08.397 "enable_placement_id": 0, 00:17:08.397 "enable_zerocopy_send_server": true, 00:17:08.397 "enable_zerocopy_send_client": false, 00:17:08.398 "zerocopy_threshold": 0, 
00:17:08.398 "tls_version": 0, 00:17:08.398 "enable_ktls": false 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "vmd", 00:17:08.398 "config": [] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "accel", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "accel_set_options", 00:17:08.398 "params": { 00:17:08.398 "small_cache_size": 128, 00:17:08.398 "large_cache_size": 16, 00:17:08.398 "task_count": 2048, 00:17:08.398 "sequence_count": 2048, 00:17:08.398 "buf_count": 2048 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "bdev", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "bdev_set_options", 00:17:08.398 "params": { 00:17:08.398 "bdev_io_pool_size": 65535, 00:17:08.398 "bdev_io_cache_size": 256, 00:17:08.398 "bdev_auto_examine": true, 00:17:08.398 "iobuf_small_cache_size": 128, 00:17:08.398 "iobuf_large_cache_size": 16 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "bdev_raid_set_options", 00:17:08.398 "params": { 00:17:08.398 "process_window_size_kb": 1024, 00:17:08.398 "process_max_bandwidth_mb_sec": 0 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "bdev_iscsi_set_options", 00:17:08.398 "params": { 00:17:08.398 "timeout_sec": 30 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "bdev_nvme_set_options", 00:17:08.398 "params": { 00:17:08.398 "action_on_timeout": "none", 00:17:08.398 "timeout_us": 0, 00:17:08.398 "timeout_admin_us": 0, 00:17:08.398 "keep_alive_timeout_ms": 10000, 00:17:08.398 "arbitration_burst": 0, 00:17:08.398 "low_priority_weight": 0, 00:17:08.398 "medium_priority_weight": 0, 00:17:08.398 "high_priority_weight": 0, 00:17:08.398 "nvme_adminq_poll_period_us": 10000, 00:17:08.398 "nvme_ioq_poll_period_us": 0, 00:17:08.398 "io_queue_requests": 0, 00:17:08.398 "delay_cmd_submit": true, 00:17:08.398 "transport_retry_count": 4, 00:17:08.398 "bdev_retry_count": 3, 00:17:08.398 "transport_ack_timeout": 0, 00:17:08.398 "ctrlr_loss_timeout_sec": 0, 00:17:08.398 "reconnect_delay_sec": 0, 00:17:08.398 "fast_io_fail_timeout_sec": 0, 00:17:08.398 "disable_auto_failback": false, 00:17:08.398 "generate_uuids": false, 00:17:08.398 "transport_tos": 0, 00:17:08.398 "nvme_error_stat": false, 00:17:08.398 "rdma_srq_size": 0, 00:17:08.398 "io_path_stat": false, 00:17:08.398 "allow_accel_sequence": false, 00:17:08.398 "rdma_max_cq_size": 0, 00:17:08.398 "rdma_cm_event_timeout_ms": 0, 00:17:08.398 "dhchap_digests": [ 00:17:08.398 "sha256", 00:17:08.398 "sha384", 00:17:08.398 "sha512" 00:17:08.398 ], 00:17:08.398 "dhchap_dhgroups": [ 00:17:08.398 "null", 00:17:08.398 "ffdhe2048", 00:17:08.398 "ffdhe3072", 00:17:08.398 "ffdhe4096", 00:17:08.398 "ffdhe6144", 00:17:08.398 "ffdhe8192" 00:17:08.398 ] 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "bdev_nvme_set_hotplug", 00:17:08.398 "params": { 00:17:08.398 "period_us": 100000, 00:17:08.398 "enable": false 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "bdev_malloc_create", 00:17:08.398 "params": { 00:17:08.398 "name": "malloc0", 00:17:08.398 "num_blocks": 8192, 00:17:08.398 "block_size": 4096, 00:17:08.398 "physical_block_size": 4096, 00:17:08.398 "uuid": "8dadb7dc-ff4b-46c0-9276-da71d88215a9", 00:17:08.398 "optimal_io_boundary": 0, 00:17:08.398 "md_size": 0, 00:17:08.398 "dif_type": 0, 00:17:08.398 "dif_is_head_of_md": false, 00:17:08.398 "dif_pi_format": 0 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 
{ 00:17:08.398 "method": "bdev_wait_for_examine" 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "scsi", 00:17:08.398 "config": null 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "scheduler", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "framework_set_scheduler", 00:17:08.398 "params": { 00:17:08.398 "name": "static" 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "vhost_scsi", 00:17:08.398 "config": [] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "vhost_blk", 00:17:08.398 "config": [] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "ublk", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "ublk_create_target", 00:17:08.398 "params": { 00:17:08.398 "cpumask": "1" 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "ublk_start_disk", 00:17:08.398 "params": { 00:17:08.398 "bdev_name": "malloc0", 00:17:08.398 "ublk_id": 0, 00:17:08.398 "num_queues": 1, 00:17:08.398 "queue_depth": 128 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "nbd", 00:17:08.398 "config": [] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "nvmf", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "nvmf_set_config", 00:17:08.398 "params": { 00:17:08.398 "discovery_filter": "match_any", 00:17:08.398 "admin_cmd_passthru": { 00:17:08.398 "identify_ctrlr": false 00:17:08.398 }, 00:17:08.398 "dhchap_digests": [ 00:17:08.398 "sha256", 00:17:08.398 "sha384", 00:17:08.398 "sha512" 00:17:08.398 ], 00:17:08.398 "dhchap_dhgroups": [ 00:17:08.398 "null", 00:17:08.398 "ffdhe2048", 00:17:08.398 "ffdhe3072", 00:17:08.398 "ffdhe4096", 00:17:08.398 "ffdhe6144", 00:17:08.398 "ffdhe8192" 00:17:08.398 ] 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "nvmf_set_max_subsystems", 00:17:08.398 "params": { 00:17:08.398 "max_subsystems": 1024 00:17:08.398 } 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "method": "nvmf_set_crdt", 00:17:08.398 "params": { 00:17:08.398 "crdt1": 0, 00:17:08.398 "crdt2": 0, 00:17:08.398 "crdt3": 0 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }, 00:17:08.398 { 00:17:08.398 "subsystem": "iscsi", 00:17:08.398 "config": [ 00:17:08.398 { 00:17:08.398 "method": "iscsi_set_options", 00:17:08.398 "params": { 00:17:08.398 "node_base": "iqn.2016-06.io.spdk", 00:17:08.398 "max_sessions": 128, 00:17:08.398 "max_connections_per_session": 2, 00:17:08.398 "max_queue_depth": 64, 00:17:08.398 "default_time2wait": 2, 00:17:08.398 "default_time2retain": 20, 00:17:08.398 "first_burst_length": 8192, 00:17:08.398 "immediate_data": true, 00:17:08.398 "allow_duplicated_isid": false, 00:17:08.398 "error_recovery_level": 0, 00:17:08.398 "nop_timeout": 60, 00:17:08.398 "nop_in_interval": 30, 00:17:08.398 "disable_chap": false, 00:17:08.398 "require_chap": false, 00:17:08.398 "mutual_chap": false, 00:17:08.398 "chap_group": 0, 00:17:08.398 "max_large_datain_per_connection": 64, 00:17:08.398 "max_r2t_per_connection": 4, 00:17:08.398 "pdu_pool_size": 36864, 00:17:08.398 "immediate_data_pool_size": 16384, 00:17:08.398 "data_out_pool_size": 2048 00:17:08.398 } 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 } 00:17:08.398 ] 00:17:08.398 }' 00:17:08.657 [2024-11-27 19:15:18.091681] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:08.657 [2024-11-27 19:15:18.092801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73659 ] 00:17:08.657 [2024-11-27 19:15:18.256661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.915 [2024-11-27 19:15:18.363656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.482 [2024-11-27 19:15:19.071143] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:09.482 [2024-11-27 19:15:19.071844] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:09.482 [2024-11-27 19:15:19.079238] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:09.482 [2024-11-27 19:15:19.079296] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:09.482 [2024-11-27 19:15:19.079305] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:09.482 [2024-11-27 19:15:19.079311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:09.482 [2024-11-27 19:15:19.088213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:09.482 [2024-11-27 19:15:19.088232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:09.482 [2024-11-27 19:15:19.095151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:09.482 [2024-11-27 19:15:19.095230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:09.482 [2024-11-27 19:15:19.112145] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73659 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73659 ']' 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73659 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73659 00:17:09.741 killing process with pid 73659 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.741 
19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73659' 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73659 00:17:09.741 19:15:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73659 00:17:10.678 [2024-11-27 19:15:20.233582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:10.678 [2024-11-27 19:15:20.264161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:10.678 [2024-11-27 19:15:20.264262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:10.678 [2024-11-27 19:15:20.273159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:10.678 [2024-11-27 19:15:20.277163] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:10.678 [2024-11-27 19:15:20.277173] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:10.678 [2024-11-27 19:15:20.277208] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:10.678 [2024-11-27 19:15:20.277327] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:12.055 19:15:21 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:12.055 00:17:12.055 real 0m7.959s 00:17:12.055 user 0m5.487s 00:17:12.055 sys 0m3.100s 00:17:12.055 ************************************ 00:17:12.055 END TEST test_save_ublk_config 00:17:12.055 ************************************ 00:17:12.055 19:15:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.055 19:15:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:12.055 19:15:21 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73732 00:17:12.055 19:15:21 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.055 19:15:21 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73732 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@835 -- # '[' -z 73732 ']' 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.055 19:15:21 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.055 19:15:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.055 [2024-11-27 19:15:21.625433] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
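The startup pattern repeating here, condensed from the trace: launch the target with a two-core mask and ublk debug logging, then block until its RPC socket answers (waitforlisten is the autotest_common.sh helper doing the polling):

  build/bin/spdk_tgt -m 0x3 -L ublk &   # cores 0-1, ublk debug traces
  spdk_pid=$!
  waitforlisten $spdk_pid               # waits on /var/tmp/spdk.sock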
00:17:12.055 [2024-11-27 19:15:21.625695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73732 ] 00:17:12.314 [2024-11-27 19:15:21.789495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:12.314 [2024-11-27 19:15:21.884649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.314 [2024-11-27 19:15:21.884706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.880 19:15:22 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.880 19:15:22 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:12.880 19:15:22 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:12.880 19:15:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:12.880 19:15:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.880 19:15:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.880 ************************************ 00:17:12.880 START TEST test_create_ublk 00:17:12.880 ************************************ 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:12.880 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.880 [2024-11-27 19:15:22.460151] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:12.880 [2024-11-27 19:15:22.461838] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.880 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:12.880 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.880 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.138 [2024-11-27 19:15:22.643260] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:13.138 [2024-11-27 19:15:22.643594] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:13.138 [2024-11-27 19:15:22.643617] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:13.138 [2024-11-27 19:15:22.643623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.138 [2024-11-27 19:15:22.652372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.138 [2024-11-27 19:15:22.652392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.138 
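The per-device bring-up being traced here, condensed; all three RPCs appear verbatim in the trace, and rpc_cmd wraps scripts/rpc.py:

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create 128 4096            # 128 MiB bdev, 4 KiB blocks -> Malloc0
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # export as /dev/ublkb0, 4 queues, depth 512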
[2024-11-27 19:15:22.659148] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.138 [2024-11-27 19:15:22.659692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:13.138 [2024-11-27 19:15:22.682157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.138 19:15:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:13.138 { 00:17:13.138 "ublk_device": "/dev/ublkb0", 00:17:13.138 "id": 0, 00:17:13.138 "queue_depth": 512, 00:17:13.138 "num_queues": 4, 00:17:13.138 "bdev_name": "Malloc0" 00:17:13.138 } 00:17:13.138 ]' 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:13.138 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:13.396 19:15:22 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
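The verify job assembled above treats the ublk node as an ordinary block device; with --time_based --runtime=10 the whole window is spent writing, which is why fio warns on the next line that the verification read phase will never start. The assembled command, unwrapped for readability:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0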
00:17:13.396 19:15:22 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:13.396 fio: verification read phase will never start because write phase uses all of runtime 00:17:13.396 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:13.396 fio-3.35 00:17:13.396 Starting 1 process 00:17:25.627 00:17:25.627 fio_test: (groupid=0, jobs=1): err= 0: pid=73772: Wed Nov 27 19:15:33 2024 00:17:25.627 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(534MiB/10001msec); 0 zone resets 00:17:25.627 clat (usec): min=42, max=10276, avg=72.30, stdev=137.44 00:17:25.627 lat (usec): min=43, max=10296, avg=72.79, stdev=137.49 00:17:25.627 clat percentiles (usec): 00:17:25.627 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:17:25.627 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 68], 00:17:25.627 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 76], 95.00th=[ 80], 00:17:25.627 | 99.00th=[ 198], 99.50th=[ 255], 99.90th=[ 2999], 99.95th=[ 3523], 00:17:25.627 | 99.99th=[ 4080] 00:17:25.627 bw ( KiB/s): min=21504, max=65912, per=99.80%, avg=54560.74, stdev=9436.70, samples=19 00:17:25.627 iops : min= 5376, max=16478, avg=13640.11, stdev=2359.14, samples=19 00:17:25.627 lat (usec) : 50=1.88%, 100=96.66%, 250=0.92%, 500=0.31%, 750=0.01% 00:17:25.627 lat (usec) : 1000=0.01% 00:17:25.627 lat (msec) : 2=0.05%, 4=0.14%, 10=0.02%, 20=0.01% 00:17:25.627 cpu : usr=2.15%, sys=13.55%, ctx=136696, majf=0, minf=797 00:17:25.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.627 issued rwts: total=0,136689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.627 00:17:25.627 Run status group 0 (all jobs): 00:17:25.627 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=534MiB (560MB), run=10001-10001msec 00:17:25.627 00:17:25.627 Disk stats (read/write): 00:17:25.627 ublkb0: ios=0/135353, merge=0/0, ticks=0/8042, in_queue=8043, util=99.09% 00:17:25.627 19:15:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.627 [2024-11-27 19:15:33.113310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:25.627 [2024-11-27 19:15:33.154649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:25.627 [2024-11-27 19:15:33.155652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:25.627 [2024-11-27 19:15:33.161170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:25.627 [2024-11-27 19:15:33.161409] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:25.627 [2024-11-27 19:15:33.161419] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.627 19:15:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.627 [2024-11-27 19:15:33.185204] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:25.627 request: 00:17:25.627 { 00:17:25.627 "ublk_id": 0, 00:17:25.627 "method": "ublk_stop_disk", 00:17:25.627 "req_id": 1 00:17:25.627 } 00:17:25.627 Got JSON-RPC error response 00:17:25.627 response: 00:17:25.627 { 00:17:25.627 "code": -19, 00:17:25.627 "message": "No such device" 00:17:25.627 } 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.627 19:15:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.627 [2024-11-27 19:15:33.201202] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:25.627 [2024-11-27 19:15:33.209138] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:25.627 [2024-11-27 19:15:33.209171] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:25.627 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:25.628 19:15:33 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:25.628 ************************************ 00:17:25.628 END TEST test_create_ublk 00:17:25.628 ************************************ 00:17:25.628 19:15:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:25.628 00:17:25.628 real 0m11.223s 00:17:25.628 user 0m0.534s 00:17:25.628 sys 0m1.427s 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:25.628 19:15:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:25.628 19:15:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.628 19:15:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 ************************************ 00:17:25.628 START TEST test_create_multi_ublk 00:17:25.628 ************************************ 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 [2024-11-27 19:15:33.725141] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:25.628 [2024-11-27 19:15:33.726872] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 [2024-11-27 19:15:33.965260] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:25.628 [2024-11-27 19:15:33.965595] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:25.628 [2024-11-27 19:15:33.965608] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:25.628 [2024-11-27 19:15:33.965617] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.628 [2024-11-27 19:15:33.989156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.628 [2024-11-27 19:15:33.989178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.628 [2024-11-27 19:15:34.001150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.628 [2024-11-27 19:15:34.001700] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:25.628 [2024-11-27 19:15:34.041155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 [2024-11-27 19:15:34.305243] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:25.628 [2024-11-27 19:15:34.305563] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:25.628 [2024-11-27 19:15:34.305576] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:25.628 [2024-11-27 19:15:34.305582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.628 [2024-11-27 19:15:34.317166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.628 [2024-11-27 19:15:34.317182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.628 [2024-11-27 19:15:34.329154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.628 [2024-11-27 19:15:34.329693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:25.628 [2024-11-27 19:15:34.365153] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.628 
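The multi-device test loops the same bring-up; the trace above covers Malloc0 and Malloc1, with Malloc2 and Malloc3 following below. Condensed, with MAX_DEV_ID=3 as in this run:

  for i in $(seq 0 3); do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
    scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # -> /dev/ublkb$i
  done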
19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 [2024-11-27 19:15:34.629233] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:25.628 [2024-11-27 19:15:34.629559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:25.628 [2024-11-27 19:15:34.629571] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:25.628 [2024-11-27 19:15:34.629578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.628 [2024-11-27 19:15:34.641161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.628 [2024-11-27 19:15:34.641183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.628 [2024-11-27 19:15:34.653143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.628 [2024-11-27 19:15:34.653699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:25.628 [2024-11-27 19:15:34.685152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.628 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.628 [2024-11-27 19:15:34.937255] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:25.628 [2024-11-27 19:15:34.937568] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:25.628 [2024-11-27 19:15:34.937581] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:25.628 [2024-11-27 19:15:34.937586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:25.628 
[2024-11-27 19:15:34.949167] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:25.628 [2024-11-27 19:15:34.949183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:25.628 [2024-11-27 19:15:34.961156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:25.629 [2024-11-27 19:15:34.961705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:25.629 [2024-11-27 19:15:34.974187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:25.629 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.629 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:25.629 19:15:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:25.629 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.629 19:15:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:25.629 { 00:17:25.629 "ublk_device": "/dev/ublkb0", 00:17:25.629 "id": 0, 00:17:25.629 "queue_depth": 512, 00:17:25.629 "num_queues": 4, 00:17:25.629 "bdev_name": "Malloc0" 00:17:25.629 }, 00:17:25.629 { 00:17:25.629 "ublk_device": "/dev/ublkb1", 00:17:25.629 "id": 1, 00:17:25.629 "queue_depth": 512, 00:17:25.629 "num_queues": 4, 00:17:25.629 "bdev_name": "Malloc1" 00:17:25.629 }, 00:17:25.629 { 00:17:25.629 "ublk_device": "/dev/ublkb2", 00:17:25.629 "id": 2, 00:17:25.629 "queue_depth": 512, 00:17:25.629 "num_queues": 4, 00:17:25.629 "bdev_name": "Malloc2" 00:17:25.629 }, 00:17:25.629 { 00:17:25.629 "ublk_device": "/dev/ublkb3", 00:17:25.629 "id": 3, 00:17:25.629 "queue_depth": 512, 00:17:25.629 "num_queues": 4, 00:17:25.629 "bdev_name": "Malloc3" 00:17:25.629 } 00:17:25.629 ]' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:25.629 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:25.887 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:25.888 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:25.888 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:25.888 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 [2024-11-27 19:15:35.669220] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.146 [2024-11-27 19:15:35.711718] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.146 [2024-11-27 19:15:35.712884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.146 [2024-11-27 19:15:35.717150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.146 [2024-11-27 19:15:35.717385] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:26.146 [2024-11-27 19:15:35.717398] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.146 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 [2024-11-27 19:15:35.735194] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.146 [2024-11-27 19:15:35.771145] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.146 [2024-11-27 19:15:35.772021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.404 [2024-11-27 19:15:35.781152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.404 [2024-11-27 19:15:35.781439] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:26.404 [2024-11-27 19:15:35.781455] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 [2024-11-27 19:15:35.793230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.404 [2024-11-27 19:15:35.827690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.404 [2024-11-27 19:15:35.828747] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.404 [2024-11-27 19:15:35.837169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.404 [2024-11-27 19:15:35.837418] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:26.404 [2024-11-27 19:15:35.837431] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.404 [2024-11-27 19:15:35.861205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:26.404 [2024-11-27 19:15:35.894666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:26.404 [2024-11-27 19:15:35.895664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:26.404 [2024-11-27 19:15:35.909150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:26.404 [2024-11-27 19:15:35.909371] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:26.404 [2024-11-27 19:15:35.909383] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 19:15:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:26.663 [2024-11-27 19:15:36.105189] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:26.663 [2024-11-27 19:15:36.113138] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:26.663 [2024-11-27 19:15:36.113165] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:26.663 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:26.663 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.663 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:26.663 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.663 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:26.921 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.921 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:26.921 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:26.921 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.921 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.488 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.488 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.488 19:15:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:27.488 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.488 19:15:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.488 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.488 19:15:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:27.488 19:15:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:27.488 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.488 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:27.746 19:15:37 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:27.746 ************************************ 00:17:27.746 END TEST test_create_multi_ublk 00:17:27.746 ************************************ 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:27.746 00:17:27.746 real 0m3.637s 00:17:27.746 user 0m0.814s 00:17:27.746 sys 0m0.164s 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.746 19:15:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:28.063 19:15:37 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:28.063 19:15:37 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:28.063 19:15:37 ublk -- ublk/ublk.sh@130 -- # killprocess 73732 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@954 -- # '[' -z 73732 ']' 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@958 -- # kill -0 73732 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@959 -- # uname 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73732 00:17:28.063 killing process with pid 73732 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73732' 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@973 -- # kill 73732 00:17:28.063 19:15:37 ublk -- common/autotest_common.sh@978 -- # wait 73732 00:17:28.346 [2024-11-27 19:15:37.979669] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:28.346 [2024-11-27 19:15:37.979716] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:29.280 00:17:29.280 real 0m25.309s 00:17:29.280 user 0m35.590s 00:17:29.280 sys 0m10.055s 00:17:29.280 19:15:38 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.280 ************************************ 00:17:29.280 END TEST ublk 00:17:29.280 ************************************ 00:17:29.280 19:15:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:29.280 19:15:38 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:29.280 
19:15:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.280 19:15:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.280 19:15:38 -- common/autotest_common.sh@10 -- # set +x 00:17:29.280 ************************************ 00:17:29.280 START TEST ublk_recovery 00:17:29.280 ************************************ 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:29.280 * Looking for test storage... 00:17:29.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.280 19:15:38 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.280 --rc genhtml_branch_coverage=1 00:17:29.280 --rc genhtml_function_coverage=1 00:17:29.280 --rc genhtml_legend=1 00:17:29.280 --rc geninfo_all_blocks=1 00:17:29.280 --rc geninfo_unexecuted_blocks=1 00:17:29.280 00:17:29.280 ' 00:17:29.280 19:15:38 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.280 --rc genhtml_branch_coverage=1 00:17:29.280 --rc genhtml_function_coverage=1 00:17:29.280 --rc genhtml_legend=1 00:17:29.280 --rc geninfo_all_blocks=1 00:17:29.280 --rc geninfo_unexecuted_blocks=1 00:17:29.280 00:17:29.280 ' 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:29.281 19:15:38 ublk_recovery -- lvol/common.sh@14
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:29.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74131 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74131 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74131 ']' 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.281 19:15:38 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.281 19:15:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 [2024-11-27 19:15:38.942842] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:29.539 [2024-11-27 19:15:38.942958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74131 ] 00:17:29.539 [2024-11-27 19:15:39.097402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.797 [2024-11-27 19:15:39.188661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.797 [2024-11-27 19:15:39.188765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:30.364 19:15:39 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 [2024-11-27 19:15:39.736152] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:30.364 [2024-11-27 19:15:39.737846] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 19:15:39 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 malloc0 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 19:15:39 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.364 [2024-11-27 19:15:39.824306] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:30.364 [2024-11-27 19:15:39.824396] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:30.364 [2024-11-27 19:15:39.824406] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:30.364 [2024-11-27 19:15:39.824413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:30.364 [2024-11-27 19:15:39.833245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:30.364 [2024-11-27 19:15:39.833264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:30.364 [2024-11-27 19:15:39.840157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:30.364 [2024-11-27 19:15:39.840283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:30.364 [2024-11-27 19:15:39.855154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:30.364 1 00:17:30.364 19:15:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.364 19:15:39 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:31.298 19:15:40 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74165 00:17:31.298 19:15:40 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:31.298 19:15:40 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:31.556 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:31.556 fio-3.35 00:17:31.556 Starting 1 process 00:17:36.823 19:15:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74131 00:17:36.823 19:15:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:42.103 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74131 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:42.103 19:15:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74271 00:17:42.103 19:15:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.103 19:15:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74271 00:17:42.103 19:15:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74271 ']' 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.103 19:15:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.103 [2024-11-27 19:15:50.947552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
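This is the crash/recovery path under test: a ublk disk serving random I/O, the target SIGKILLed mid-flight, and the restart now under way, after which the new target reattaches the still-live kernel device (commands and pids are from this run's trace; rpc_cmd wraps scripts/rpc.py):

  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # serve /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  sleep 5
  kill -9 74131                         # hard-kill spdk_tgt with I/O in flight
  build/bin/spdk_tgt -m 0x3 -L ublk &   # restart the target
  scripts/rpc.py ublk_create_target
  scripts/rpc.py ublk_recover_disk malloc0 1   # reattach ublk 1 to malloc0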
00:17:42.103 [2024-11-27 19:15:50.947656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74271 ] 00:17:42.103 [2024-11-27 19:15:51.099416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.103 [2024-11-27 19:15:51.191089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.103 [2024-11-27 19:15:51.191105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:42.362 19:15:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.362 [2024-11-27 19:15:51.785148] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:42.362 [2024-11-27 19:15:51.786857] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.362 19:15:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.362 malloc0 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.362 19:15:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.362 [2024-11-27 19:15:51.873582] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:42.362 [2024-11-27 19:15:51.873617] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:42.362 [2024-11-27 19:15:51.873626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:42.362 [2024-11-27 19:15:51.881174] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:42.362 [2024-11-27 19:15:51.881196] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:42.362 1 00:17:42.362 19:15:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.362 19:15:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74165 00:17:43.297 [2024-11-27 19:15:52.881220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:43.297 [2024-11-27 19:15:52.888150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:43.297 [2024-11-27 19:15:52.888165] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:44.671 [2024-11-27 19:15:53.888187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:44.671 [2024-11-27 19:15:53.892154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:44.671 [2024-11-27 19:15:53.892168] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:45.607 [2024-11-27 19:15:54.892186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:45.607 [2024-11-27 19:15:54.900159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:45.607 [2024-11-27 19:15:54.900171] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:45.607 [2024-11-27 19:15:54.900184] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:45.607 [2024-11-27 19:15:54.900257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:07.520 [2024-11-27 19:16:15.949160] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:07.520 [2024-11-27 19:16:15.956514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:07.520 [2024-11-27 19:16:15.964344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:07.520 [2024-11-27 19:16:15.964363] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:34.060 00:18:34.060 fio_test: (groupid=0, jobs=1): err= 0: pid=74169: Wed Nov 27 19:16:41 2024 00:18:34.060 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(3229MiB/60002msec) 00:18:34.060 slat (nsec): min=1172, max=205473, avg=5495.35, stdev=1561.69 00:18:34.060 clat (usec): min=934, max=30107k, avg=4597.49, stdev=264873.83 00:18:34.060 lat (usec): min=942, max=30107k, avg=4602.99, stdev=264873.83 00:18:34.060 clat percentiles (usec): 00:18:34.060 | 1.00th=[ 1844], 5.00th=[ 1926], 10.00th=[ 1975], 20.00th=[ 2073], 00:18:34.060 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2180], 00:18:34.060 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3163], 00:18:34.060 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 7570], 99.95th=[11994], 00:18:34.060 | 99.99th=[13435] 00:18:34.060 bw ( KiB/s): min=13040, max=124232, per=100.00%, avg=108540.13, stdev=16739.29, samples=60 00:18:34.060 iops : min= 3260, max=31058, avg=27135.03, stdev=4184.82, samples=60 00:18:34.060 write: IOPS=13.8k, BW=53.7MiB/s (56.4MB/s)(3225MiB/60002msec); 0 zone resets 00:18:34.060 slat (nsec): min=1214, max=214690, avg=5672.88, stdev=1456.37 00:18:34.060 clat (usec): min=889, max=30106k, avg=4688.04, stdev=265046.63 00:18:34.060 lat (usec): min=896, max=30107k, avg=4693.71, stdev=265046.62 00:18:34.060 clat percentiles (usec): 00:18:34.060 | 1.00th=[ 1909], 5.00th=[ 2024], 10.00th=[ 2057], 20.00th=[ 2147], 00:18:34.060 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2278], 00:18:34.060 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2376], 95.00th=[ 3097], 00:18:34.060 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7701], 99.95th=[12125], 00:18:34.060 | 99.99th=[13435] 00:18:34.060 bw ( KiB/s): min=13248, max=123320, per=100.00%, avg=108409.20, stdev=16806.60, samples=60 00:18:34.060 iops : min= 3312, max=30830, avg=27102.30, stdev=4201.65, samples=60 00:18:34.060 lat (usec) : 1000=0.01% 00:18:34.060 lat (msec) : 2=9.04%, 4=88.13%, 10=2.78%, 20=0.04%, >=2000=0.01% 00:18:34.060 cpu : usr=3.07%, sys=15.78%, ctx=54127, majf=0, minf=13 00:18:34.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:34.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.060 issued 
rwts: total=826573,825492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.060 00:18:34.060 Run status group 0 (all jobs): 00:18:34.060 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=3229MiB (3386MB), run=60002-60002msec 00:18:34.060 WRITE: bw=53.7MiB/s (56.4MB/s), 53.7MiB/s-53.7MiB/s (56.4MB/s-56.4MB/s), io=3225MiB (3381MB), run=60002-60002msec 00:18:34.060 00:18:34.060 Disk stats (read/write): 00:18:34.060 ublkb1: ios=823473/822484, merge=0/0, ticks=3747326/3745467, in_queue=7492794, util=99.91% 00:18:34.060 19:16:41 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:34.060 19:16:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.060 19:16:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 [2024-11-27 19:16:41.119287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:34.060 [2024-11-27 19:16:41.163162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:34.060 [2024-11-27 19:16:41.163356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:34.060 [2024-11-27 19:16:41.177157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:34.060 [2024-11-27 19:16:41.177265] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:34.060 [2024-11-27 19:16:41.177272] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:34.060 19:16:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.060 19:16:41 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:34.060 19:16:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.060 19:16:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.060 [2024-11-27 19:16:41.181321] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:34.061 [2024-11-27 19:16:41.188139] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.061 [2024-11-27 19:16:41.188173] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.061 19:16:41 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:34.061 19:16:41 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:34.061 19:16:41 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74271 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74271 ']' 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74271 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74271 00:18:34.061 killing process with pid 74271 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74271' 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74271 00:18:34.061 19:16:41 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74271 
00:18:34.061 [2024-11-27 19:16:42.287140] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:34.061 [2024-11-27 19:16:42.287191] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.061 00:18:34.061 real 1m4.316s 00:18:34.061 user 1m44.420s 00:18:34.061 sys 0m24.793s 00:18:34.061 19:16:43 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.061 ************************************ 00:18:34.061 END TEST ublk_recovery 00:18:34.061 ************************************ 00:18:34.061 19:16:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 19:16:43 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:34.061 19:16:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:34.061 19:16:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.061 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 19:16:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:34.061 19:16:43 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.061 19:16:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.061 19:16:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.061 19:16:43 -- common/autotest_common.sh@10 -- # set +x 00:18:34.061 ************************************ 00:18:34.061 START TEST ftl 00:18:34.061 ************************************ 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.061 * Looking for test storage... 
00:18:34.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.061 19:16:43 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.061 19:16:43 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.061 19:16:43 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.061 19:16:43 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.061 19:16:43 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.061 19:16:43 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:34.061 19:16:43 ftl -- scripts/common.sh@345 -- # : 1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.061 19:16:43 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.061 19:16:43 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@353 -- # local d=1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.061 19:16:43 ftl -- scripts/common.sh@355 -- # echo 1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.061 19:16:43 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@353 -- # local d=2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.061 19:16:43 ftl -- scripts/common.sh@355 -- # echo 2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.061 19:16:43 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.061 19:16:43 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.061 19:16:43 ftl -- scripts/common.sh@368 -- # return 0 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.061 --rc genhtml_branch_coverage=1 00:18:34.061 --rc genhtml_function_coverage=1 00:18:34.061 --rc genhtml_legend=1 00:18:34.061 --rc geninfo_all_blocks=1 00:18:34.061 --rc geninfo_unexecuted_blocks=1 00:18:34.061 00:18:34.061 ' 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.061 --rc genhtml_branch_coverage=1 00:18:34.061 --rc genhtml_function_coverage=1 00:18:34.061 --rc genhtml_legend=1 00:18:34.061 --rc geninfo_all_blocks=1 00:18:34.061 --rc geninfo_unexecuted_blocks=1 00:18:34.061 00:18:34.061 ' 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.061 --rc genhtml_branch_coverage=1 00:18:34.061 --rc genhtml_function_coverage=1 00:18:34.061 --rc 
genhtml_legend=1 00:18:34.061 --rc geninfo_all_blocks=1 00:18:34.061 --rc geninfo_unexecuted_blocks=1 00:18:34.061 00:18:34.061 ' 00:18:34.061 19:16:43 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.061 --rc genhtml_branch_coverage=1 00:18:34.061 --rc genhtml_function_coverage=1 00:18:34.061 --rc genhtml_legend=1 00:18:34.061 --rc geninfo_all_blocks=1 00:18:34.061 --rc geninfo_unexecuted_blocks=1 00:18:34.061 00:18:34.061 ' 00:18:34.061 19:16:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:34.062 19:16:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:34.062 19:16:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.062 19:16:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:34.062 19:16:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:34.062 19:16:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:34.062 19:16:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.062 19:16:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.062 19:16:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.062 19:16:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:34.062 19:16:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:34.062 19:16:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:34.062 19:16:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:34.062 19:16:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.062 19:16:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:34.062 19:16:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:34.062 19:16:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:34.062 19:16:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:34.062 19:16:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:34.062 19:16:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:34.062 19:16:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:34.062 19:16:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:34.062 19:16:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.062 19:16:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:34.062 19:16:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:34.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:34.355 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:34.355 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:34.355 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:34.355 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:34.355 19:16:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75072 00:18:34.355 19:16:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:34.355 19:16:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75072 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 75072 ']' 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.355 19:16:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:34.355 [2024-11-27 19:16:43.871363] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:34.355 [2024-11-27 19:16:43.871611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75072 ] 00:18:34.622 [2024-11-27 19:16:44.025402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.623 [2024-11-27 19:16:44.121807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.188 19:16:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.188 19:16:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:35.188 19:16:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:35.446 19:16:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:36.013 19:16:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:36.013 19:16:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:36.579 19:16:46 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:36.579 19:16:46 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:36.579 19:16:46 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@50 -- # break 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:36.837 19:16:46 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:36.837 19:16:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:37.096 19:16:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:37.096 19:16:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:37.096 19:16:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:37.096 19:16:46 ftl -- ftl/ftl.sh@63 -- # break 00:18:37.096 19:16:46 ftl -- ftl/ftl.sh@66 -- # killprocess 75072 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 75072 ']' 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@958 -- # kill -0 75072 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@959 -- # uname 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75072 00:18:37.096 killing process with pid 75072 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75072' 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@973 -- # kill 75072 00:18:37.096 19:16:46 ftl -- common/autotest_common.sh@978 -- # wait 75072 00:18:38.472 19:16:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:38.472 19:16:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:38.472 19:16:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:38.472 19:16:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.472 19:16:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:38.472 ************************************ 00:18:38.472 START TEST ftl_fio_basic 00:18:38.472 ************************************ 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:38.472 * Looking for test storage... 
00:18:38.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.472 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:38.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.473 --rc genhtml_branch_coverage=1 00:18:38.473 --rc genhtml_function_coverage=1 00:18:38.473 --rc genhtml_legend=1 00:18:38.473 --rc geninfo_all_blocks=1 00:18:38.473 --rc geninfo_unexecuted_blocks=1 00:18:38.473 00:18:38.473 ' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:38.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.473 --rc 
genhtml_branch_coverage=1 00:18:38.473 --rc genhtml_function_coverage=1 00:18:38.473 --rc genhtml_legend=1 00:18:38.473 --rc geninfo_all_blocks=1 00:18:38.473 --rc geninfo_unexecuted_blocks=1 00:18:38.473 00:18:38.473 ' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:38.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.473 --rc genhtml_branch_coverage=1 00:18:38.473 --rc genhtml_function_coverage=1 00:18:38.473 --rc genhtml_legend=1 00:18:38.473 --rc geninfo_all_blocks=1 00:18:38.473 --rc geninfo_unexecuted_blocks=1 00:18:38.473 00:18:38.473 ' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:38.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.473 --rc genhtml_branch_coverage=1 00:18:38.473 --rc genhtml_function_coverage=1 00:18:38.473 --rc genhtml_legend=1 00:18:38.473 --rc geninfo_all_blocks=1 00:18:38.473 --rc geninfo_unexecuted_blocks=1 00:18:38.473 00:18:38.473 ' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:38.473 
19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75204 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75204 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75204 ']' 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:38.473 19:16:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.473 [2024-11-27 19:16:48.014784] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:38.473 [2024-11-27 19:16:48.015053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75204 ] 00:18:38.733 [2024-11-27 19:16:48.168336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.733 [2024-11-27 19:16:48.262305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.733 [2024-11-27 19:16:48.262500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.733 [2024-11-27 19:16:48.262500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:39.300 19:16:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:39.559 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.817 { 00:18:39.817 "name": "nvme0n1", 00:18:39.817 "aliases": [ 00:18:39.817 "f6523b2e-3ff9-4270-9eaa-d1cc652a3a83" 00:18:39.817 ], 00:18:39.817 "product_name": "NVMe disk", 00:18:39.817 "block_size": 4096, 00:18:39.817 "num_blocks": 1310720, 00:18:39.817 "uuid": "f6523b2e-3ff9-4270-9eaa-d1cc652a3a83", 00:18:39.817 "numa_id": -1, 00:18:39.817 "assigned_rate_limits": { 00:18:39.817 "rw_ios_per_sec": 0, 00:18:39.817 "rw_mbytes_per_sec": 0, 00:18:39.817 "r_mbytes_per_sec": 0, 00:18:39.817 "w_mbytes_per_sec": 0 00:18:39.817 }, 00:18:39.817 "claimed": false, 00:18:39.817 "zoned": false, 00:18:39.817 "supported_io_types": { 00:18:39.817 "read": true, 00:18:39.817 "write": true, 00:18:39.817 "unmap": true, 00:18:39.817 "flush": true, 
00:18:39.817 "reset": true, 00:18:39.817 "nvme_admin": true, 00:18:39.817 "nvme_io": true, 00:18:39.817 "nvme_io_md": false, 00:18:39.817 "write_zeroes": true, 00:18:39.817 "zcopy": false, 00:18:39.817 "get_zone_info": false, 00:18:39.817 "zone_management": false, 00:18:39.817 "zone_append": false, 00:18:39.817 "compare": true, 00:18:39.817 "compare_and_write": false, 00:18:39.817 "abort": true, 00:18:39.817 "seek_hole": false, 00:18:39.817 "seek_data": false, 00:18:39.817 "copy": true, 00:18:39.817 "nvme_iov_md": false 00:18:39.817 }, 00:18:39.817 "driver_specific": { 00:18:39.817 "nvme": [ 00:18:39.817 { 00:18:39.817 "pci_address": "0000:00:11.0", 00:18:39.817 "trid": { 00:18:39.817 "trtype": "PCIe", 00:18:39.817 "traddr": "0000:00:11.0" 00:18:39.817 }, 00:18:39.817 "ctrlr_data": { 00:18:39.817 "cntlid": 0, 00:18:39.817 "vendor_id": "0x1b36", 00:18:39.817 "model_number": "QEMU NVMe Ctrl", 00:18:39.817 "serial_number": "12341", 00:18:39.817 "firmware_revision": "8.0.0", 00:18:39.817 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:39.817 "oacs": { 00:18:39.817 "security": 0, 00:18:39.817 "format": 1, 00:18:39.817 "firmware": 0, 00:18:39.817 "ns_manage": 1 00:18:39.817 }, 00:18:39.817 "multi_ctrlr": false, 00:18:39.817 "ana_reporting": false 00:18:39.817 }, 00:18:39.817 "vs": { 00:18:39.817 "nvme_version": "1.4" 00:18:39.817 }, 00:18:39.817 "ns_data": { 00:18:39.817 "id": 1, 00:18:39.817 "can_share": false 00:18:39.817 } 00:18:39.817 } 00:18:39.817 ], 00:18:39.817 "mp_policy": "active_passive" 00:18:39.817 } 00:18:39.817 } 00:18:39.817 ]' 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:39.817 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:40.075 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:40.075 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:40.075 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=2a70052e-34ba-4c2e-9684-5bc6e1faef71 00:18:40.075 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2a70052e-34ba-4c2e-9684-5bc6e1faef71 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:40.333 19:16:49 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:40.333 19:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.591 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.591 { 00:18:40.591 "name": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:40.591 "aliases": [ 00:18:40.591 "lvs/nvme0n1p0" 00:18:40.591 ], 00:18:40.591 "product_name": "Logical Volume", 00:18:40.592 "block_size": 4096, 00:18:40.592 "num_blocks": 26476544, 00:18:40.592 "uuid": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:40.592 "assigned_rate_limits": { 00:18:40.592 "rw_ios_per_sec": 0, 00:18:40.592 "rw_mbytes_per_sec": 0, 00:18:40.592 "r_mbytes_per_sec": 0, 00:18:40.592 "w_mbytes_per_sec": 0 00:18:40.592 }, 00:18:40.592 "claimed": false, 00:18:40.592 "zoned": false, 00:18:40.592 "supported_io_types": { 00:18:40.592 "read": true, 00:18:40.592 "write": true, 00:18:40.592 "unmap": true, 00:18:40.592 "flush": false, 00:18:40.592 "reset": true, 00:18:40.592 "nvme_admin": false, 00:18:40.592 "nvme_io": false, 00:18:40.592 "nvme_io_md": false, 00:18:40.592 "write_zeroes": true, 00:18:40.592 "zcopy": false, 00:18:40.592 "get_zone_info": false, 00:18:40.592 "zone_management": false, 00:18:40.592 "zone_append": false, 00:18:40.592 "compare": false, 00:18:40.592 "compare_and_write": false, 00:18:40.592 "abort": false, 00:18:40.592 "seek_hole": true, 00:18:40.592 "seek_data": true, 00:18:40.592 "copy": false, 00:18:40.592 "nvme_iov_md": false 00:18:40.592 }, 00:18:40.592 "driver_specific": { 00:18:40.592 "lvol": { 00:18:40.592 "lvol_store_uuid": "2a70052e-34ba-4c2e-9684-5bc6e1faef71", 00:18:40.592 "base_bdev": "nvme0n1", 00:18:40.592 "thin_provision": true, 00:18:40.592 "num_allocated_clusters": 0, 00:18:40.592 "snapshot": false, 00:18:40.592 "clone": false, 00:18:40.592 "esnap_clone": false 00:18:40.592 } 00:18:40.592 } 00:18:40.592 } 00:18:40.592 ]' 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:40.592 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:18:40.851 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:40.851 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.110 { 00:18:41.110 "name": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:41.110 "aliases": [ 00:18:41.110 "lvs/nvme0n1p0" 00:18:41.110 ], 00:18:41.110 "product_name": "Logical Volume", 00:18:41.110 "block_size": 4096, 00:18:41.110 "num_blocks": 26476544, 00:18:41.110 "uuid": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:41.110 "assigned_rate_limits": { 00:18:41.110 "rw_ios_per_sec": 0, 00:18:41.110 "rw_mbytes_per_sec": 0, 00:18:41.110 "r_mbytes_per_sec": 0, 00:18:41.110 "w_mbytes_per_sec": 0 00:18:41.110 }, 00:18:41.110 "claimed": false, 00:18:41.110 "zoned": false, 00:18:41.110 "supported_io_types": { 00:18:41.110 "read": true, 00:18:41.110 "write": true, 00:18:41.110 "unmap": true, 00:18:41.110 "flush": false, 00:18:41.110 "reset": true, 00:18:41.110 "nvme_admin": false, 00:18:41.110 "nvme_io": false, 00:18:41.110 "nvme_io_md": false, 00:18:41.110 "write_zeroes": true, 00:18:41.110 "zcopy": false, 00:18:41.110 "get_zone_info": false, 00:18:41.110 "zone_management": false, 00:18:41.110 "zone_append": false, 00:18:41.110 "compare": false, 00:18:41.110 "compare_and_write": false, 00:18:41.110 "abort": false, 00:18:41.110 "seek_hole": true, 00:18:41.110 "seek_data": true, 00:18:41.110 "copy": false, 00:18:41.110 "nvme_iov_md": false 00:18:41.110 }, 00:18:41.110 "driver_specific": { 00:18:41.110 "lvol": { 00:18:41.110 "lvol_store_uuid": "2a70052e-34ba-4c2e-9684-5bc6e1faef71", 00:18:41.110 "base_bdev": "nvme0n1", 00:18:41.110 "thin_provision": true, 00:18:41.110 "num_allocated_clusters": 0, 00:18:41.110 "snapshot": false, 00:18:41.110 "clone": false, 00:18:41.110 "esnap_clone": false 00:18:41.110 } 00:18:41.110 } 00:18:41.110 } 00:18:41.110 ]' 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:41.110 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:41.111 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:41.111 19:16:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:41.369 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:41.369 19:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adc2f30f-03cd-4a22-a478-165c2a9ddbe2 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:41.628 { 00:18:41.628 "name": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:41.628 "aliases": [ 00:18:41.628 "lvs/nvme0n1p0" 00:18:41.628 ], 00:18:41.628 "product_name": "Logical Volume", 00:18:41.628 "block_size": 4096, 00:18:41.628 "num_blocks": 26476544, 00:18:41.628 "uuid": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:41.628 "assigned_rate_limits": { 00:18:41.628 "rw_ios_per_sec": 0, 00:18:41.628 "rw_mbytes_per_sec": 0, 00:18:41.628 "r_mbytes_per_sec": 0, 00:18:41.628 "w_mbytes_per_sec": 0 00:18:41.628 }, 00:18:41.628 "claimed": false, 00:18:41.628 "zoned": false, 00:18:41.628 "supported_io_types": { 00:18:41.628 "read": true, 00:18:41.628 "write": true, 00:18:41.628 "unmap": true, 00:18:41.628 "flush": false, 00:18:41.628 "reset": true, 00:18:41.628 "nvme_admin": false, 00:18:41.628 "nvme_io": false, 00:18:41.628 "nvme_io_md": false, 00:18:41.628 "write_zeroes": true, 00:18:41.628 "zcopy": false, 00:18:41.628 "get_zone_info": false, 00:18:41.628 "zone_management": false, 00:18:41.628 "zone_append": false, 00:18:41.628 "compare": false, 00:18:41.628 "compare_and_write": false, 00:18:41.628 "abort": false, 00:18:41.628 "seek_hole": true, 00:18:41.628 "seek_data": true, 00:18:41.628 "copy": false, 00:18:41.628 "nvme_iov_md": false 00:18:41.628 }, 00:18:41.628 "driver_specific": { 00:18:41.628 "lvol": { 00:18:41.628 "lvol_store_uuid": "2a70052e-34ba-4c2e-9684-5bc6e1faef71", 00:18:41.628 "base_bdev": "nvme0n1", 00:18:41.628 "thin_provision": true, 00:18:41.628 "num_allocated_clusters": 0, 00:18:41.628 "snapshot": false, 00:18:41.628 "clone": false, 00:18:41.628 "esnap_clone": false 00:18:41.628 } 00:18:41.628 } 00:18:41.628 } 00:18:41.628 ]' 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:41.628 19:16:51 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
adc2f30f-03cd-4a22-a478-165c2a9ddbe2 -c nvc0n1p0 --l2p_dram_limit 60 00:18:41.888 [2024-11-27 19:16:51.317973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.318016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.888 [2024-11-27 19:16:51.318030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.888 [2024-11-27 19:16:51.318036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.318092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.318100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.888 [2024-11-27 19:16:51.318109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:41.888 [2024-11-27 19:16:51.318115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.318156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.888 [2024-11-27 19:16:51.318686] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.888 [2024-11-27 19:16:51.318714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.318721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.888 [2024-11-27 19:16:51.318730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:18:41.888 [2024-11-27 19:16:51.318736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.318791] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aabd9c7e-2dfc-42ed-89c6-8e98857a6550 00:18:41.888 [2024-11-27 19:16:51.320079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.320108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:41.888 [2024-11-27 19:16:51.320116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:41.888 [2024-11-27 19:16:51.320135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.326864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.326894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.888 [2024-11-27 19:16:51.326902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:18:41.888 [2024-11-27 19:16:51.326914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.327000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.327011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.888 [2024-11-27 19:16:51.327017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:41.888 [2024-11-27 19:16:51.327028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.327079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.327089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.888 [2024-11-27 19:16:51.327096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:18:41.888 [2024-11-27 19:16:51.327103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.327141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.888 [2024-11-27 19:16:51.330348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.330371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.888 [2024-11-27 19:16:51.330383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:18:41.888 [2024-11-27 19:16:51.330389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.330427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.330433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.888 [2024-11-27 19:16:51.330442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:41.888 [2024-11-27 19:16:51.330448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.330469] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:41.888 [2024-11-27 19:16:51.330593] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:41.888 [2024-11-27 19:16:51.330607] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.888 [2024-11-27 19:16:51.330616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:41.888 [2024-11-27 19:16:51.330627] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.888 [2024-11-27 19:16:51.330634] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.888 [2024-11-27 19:16:51.330642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:41.888 [2024-11-27 19:16:51.330648] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.888 [2024-11-27 19:16:51.330656] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:41.888 [2024-11-27 19:16:51.330662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:41.888 [2024-11-27 19:16:51.330671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.330677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.888 [2024-11-27 19:16:51.330686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:18:41.888 [2024-11-27 19:16:51.330692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.330765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.888 [2024-11-27 19:16:51.330772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.888 [2024-11-27 19:16:51.330780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:41.888 [2024-11-27 19:16:51.330785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.888 [2024-11-27 19:16:51.330884] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:18:41.888 [2024-11-27 19:16:51.330894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.888 [2024-11-27 19:16:51.330902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.888 [2024-11-27 19:16:51.330908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.888 [2024-11-27 19:16:51.330915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.888 [2024-11-27 19:16:51.330920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.888 [2024-11-27 19:16:51.330927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:41.888 [2024-11-27 19:16:51.330933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:41.888 [2024-11-27 19:16:51.330939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:41.888 [2024-11-27 19:16:51.330944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.888 [2024-11-27 19:16:51.330951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.888 [2024-11-27 19:16:51.330957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:41.889 [2024-11-27 19:16:51.330963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.889 [2024-11-27 19:16:51.330968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.889 [2024-11-27 19:16:51.330975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:41.889 [2024-11-27 19:16:51.330981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.889 [2024-11-27 19:16:51.330991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.889 [2024-11-27 19:16:51.330996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.889 [2024-11-27 19:16:51.331015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.889 [2024-11-27 19:16:51.331033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.889 [2024-11-27 19:16:51.331051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.889 [2024-11-27 19:16:51.331074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.889 [2024-11-27 19:16:51.331095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.889 [2024-11-27 19:16:51.331119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.889 [2024-11-27 19:16:51.331141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:41.889 [2024-11-27 19:16:51.331147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.889 [2024-11-27 19:16:51.331153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:41.889 [2024-11-27 19:16:51.331162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:41.889 [2024-11-27 19:16:51.331167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:41.889 [2024-11-27 19:16:51.331178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:41.889 [2024-11-27 19:16:51.331186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.889 [2024-11-27 19:16:51.331199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.889 [2024-11-27 19:16:51.331204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.889 [2024-11-27 19:16:51.331218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:41.889 [2024-11-27 19:16:51.331227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.889 [2024-11-27 19:16:51.331232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.889 [2024-11-27 19:16:51.331239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.889 [2024-11-27 19:16:51.331244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.889 [2024-11-27 19:16:51.331251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.889 [2024-11-27 19:16:51.331260] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.889 [2024-11-27 19:16:51.331269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:41.889 [2024-11-27 19:16:51.331284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:41.889 [2024-11-27 19:16:51.331290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:41.889 [2024-11-27 19:16:51.331297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:41.889 [2024-11-27 19:16:51.331302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:41.889 [2024-11-27 19:16:51.331308] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:41.889 [2024-11-27 19:16:51.331316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:41.889 [2024-11-27 19:16:51.331323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:41.889 [2024-11-27 19:16:51.331329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:41.889 [2024-11-27 19:16:51.331337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:41.889 [2024-11-27 19:16:51.331370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.889 [2024-11-27 19:16:51.331379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.889 [2024-11-27 19:16:51.331392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.889 [2024-11-27 19:16:51.331397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.889 [2024-11-27 19:16:51.331405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:41.889 [2024-11-27 19:16:51.331411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.889 [2024-11-27 19:16:51.331418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.889 [2024-11-27 19:16:51.331424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:18:41.889 [2024-11-27 19:16:51.331431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.889 [2024-11-27 19:16:51.331504] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:41.889 [2024-11-27 19:16:51.331522] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:44.432 [2024-11-27 19:16:54.063110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.432 [2024-11-27 19:16:54.063167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:44.432 [2024-11-27 19:16:54.063182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2731.596 ms 00:18:44.432 [2024-11-27 19:16:54.063193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.694 [2024-11-27 19:16:54.090835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.694 [2024-11-27 19:16:54.090875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.694 [2024-11-27 19:16:54.090887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.426 ms 00:18:44.694 [2024-11-27 19:16:54.090898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.694 [2024-11-27 19:16:54.091030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.091044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.695 [2024-11-27 19:16:54.091053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:44.695 [2024-11-27 19:16:54.091065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.132577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.132621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.695 [2024-11-27 19:16:54.132633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.465 ms 00:18:44.695 [2024-11-27 19:16:54.132645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.132694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.132706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.695 [2024-11-27 19:16:54.132715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.695 [2024-11-27 19:16:54.132724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.133197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.133217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.695 [2024-11-27 19:16:54.133229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:18:44.695 [2024-11-27 19:16:54.133240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.133364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.133376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.695 [2024-11-27 19:16:54.133385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:44.695 [2024-11-27 19:16:54.133395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.149241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.149272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.695 [2024-11-27 
19:16:54.149283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.818 ms 00:18:44.695 [2024-11-27 19:16:54.149293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.161742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:44.695 [2024-11-27 19:16:54.178793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.178964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.695 [2024-11-27 19:16:54.178986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.408 ms 00:18:44.695 [2024-11-27 19:16:54.178996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.239379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.239413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:44.695 [2024-11-27 19:16:54.239428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.345 ms 00:18:44.695 [2024-11-27 19:16:54.239436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.239624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.239635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.695 [2024-11-27 19:16:54.239648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:44.695 [2024-11-27 19:16:54.239656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.262705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.262739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:44.695 [2024-11-27 19:16:54.262752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.983 ms 00:18:44.695 [2024-11-27 19:16:54.262760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.285012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.285049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:44.695 [2024-11-27 19:16:54.285062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.211 ms 00:18:44.695 [2024-11-27 19:16:54.285069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.695 [2024-11-27 19:16:54.285668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.695 [2024-11-27 19:16:54.285684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.695 [2024-11-27 19:16:54.285695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:18:44.695 [2024-11-27 19:16:54.285703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.356698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.356820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.956 [2024-11-27 19:16:54.356845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.950 ms 00:18:44.956 [2024-11-27 19:16:54.356854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 
19:16:54.381376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.381406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.956 [2024-11-27 19:16:54.381419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.443 ms 00:18:44.956 [2024-11-27 19:16:54.381427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.404203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.404233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:44.956 [2024-11-27 19:16:54.404245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.740 ms 00:18:44.956 [2024-11-27 19:16:54.404253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.427307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.427439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.956 [2024-11-27 19:16:54.427460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.022 ms 00:18:44.956 [2024-11-27 19:16:54.427468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.427501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.427510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.956 [2024-11-27 19:16:54.427525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:44.956 [2024-11-27 19:16:54.427533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.427623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.956 [2024-11-27 19:16:54.427633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.956 [2024-11-27 19:16:54.427644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:44.956 [2024-11-27 19:16:54.427652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.956 [2024-11-27 19:16:54.428640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3110.182 ms, result 0 00:18:44.956 { 00:18:44.956 "name": "ftl0", 00:18:44.956 "uuid": "aabd9c7e-2dfc-42ed-89c6-8e98857a6550" 00:18:44.956 } 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:44.956 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:45.218 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:45.218 [ 00:18:45.218 { 00:18:45.218 "name": "ftl0", 00:18:45.218 "aliases": [ 00:18:45.218 "aabd9c7e-2dfc-42ed-89c6-8e98857a6550" 00:18:45.218 ], 00:18:45.218 "product_name": "FTL 
disk", 00:18:45.218 "block_size": 4096, 00:18:45.218 "num_blocks": 20971520, 00:18:45.218 "uuid": "aabd9c7e-2dfc-42ed-89c6-8e98857a6550", 00:18:45.218 "assigned_rate_limits": { 00:18:45.218 "rw_ios_per_sec": 0, 00:18:45.218 "rw_mbytes_per_sec": 0, 00:18:45.218 "r_mbytes_per_sec": 0, 00:18:45.218 "w_mbytes_per_sec": 0 00:18:45.218 }, 00:18:45.218 "claimed": false, 00:18:45.218 "zoned": false, 00:18:45.218 "supported_io_types": { 00:18:45.218 "read": true, 00:18:45.218 "write": true, 00:18:45.218 "unmap": true, 00:18:45.218 "flush": true, 00:18:45.218 "reset": false, 00:18:45.218 "nvme_admin": false, 00:18:45.218 "nvme_io": false, 00:18:45.218 "nvme_io_md": false, 00:18:45.218 "write_zeroes": true, 00:18:45.218 "zcopy": false, 00:18:45.218 "get_zone_info": false, 00:18:45.218 "zone_management": false, 00:18:45.218 "zone_append": false, 00:18:45.218 "compare": false, 00:18:45.218 "compare_and_write": false, 00:18:45.218 "abort": false, 00:18:45.218 "seek_hole": false, 00:18:45.218 "seek_data": false, 00:18:45.218 "copy": false, 00:18:45.218 "nvme_iov_md": false 00:18:45.218 }, 00:18:45.218 "driver_specific": { 00:18:45.218 "ftl": { 00:18:45.218 "base_bdev": "adc2f30f-03cd-4a22-a478-165c2a9ddbe2", 00:18:45.218 "cache": "nvc0n1p0" 00:18:45.218 } 00:18:45.218 } 00:18:45.218 } 00:18:45.218 ] 00:18:45.218 19:16:54 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:45.218 19:16:54 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:45.218 19:16:54 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:45.477 19:16:55 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:45.477 19:16:55 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:45.737 [2024-11-27 19:16:55.241442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.241479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.737 [2024-11-27 19:16:55.241490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.737 [2024-11-27 19:16:55.241501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.241530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.737 [2024-11-27 19:16:55.243762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.243787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.737 [2024-11-27 19:16:55.243798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:18:45.737 [2024-11-27 19:16:55.243805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.244196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.244211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.737 [2024-11-27 19:16:55.244220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:45.737 [2024-11-27 19:16:55.244227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.246681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.246699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.737 
[2024-11-27 19:16:55.246708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.426 ms 00:18:45.737 [2024-11-27 19:16:55.246715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.251551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.251572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.737 [2024-11-27 19:16:55.251581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:18:45.737 [2024-11-27 19:16:55.251587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.270129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.270155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.737 [2024-11-27 19:16:55.270177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.458 ms 00:18:45.737 [2024-11-27 19:16:55.270183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.282780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.282808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.737 [2024-11-27 19:16:55.282822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.558 ms 00:18:45.737 [2024-11-27 19:16:55.282830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.282981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.282990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.737 [2024-11-27 19:16:55.282999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:45.737 [2024-11-27 19:16:55.283005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.300893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.300918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:45.737 [2024-11-27 19:16:55.300928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:18:45.737 [2024-11-27 19:16:55.300934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.318444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.318468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:45.737 [2024-11-27 19:16:55.318478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.471 ms 00:18:45.737 [2024-11-27 19:16:55.318484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.335468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.335492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.737 [2024-11-27 19:16:55.335503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.942 ms 00:18:45.737 [2024-11-27 19:16:55.335509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.737 [2024-11-27 19:16:55.352750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.737 [2024-11-27 19:16:55.352774] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.737 [2024-11-27 19:16:55.352784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.160 ms 00:18:45.738 [2024-11-27 19:16:55.352789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.738 [2024-11-27 19:16:55.352824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.738 [2024-11-27 19:16:55.352835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 
[2024-11-27 19:16:55.352984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.352999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:45.738 [2024-11-27 19:16:55.353169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.738 [2024-11-27 19:16:55.353468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.739 [2024-11-27 19:16:55.353572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.739 [2024-11-27 19:16:55.353580] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aabd9c7e-2dfc-42ed-89c6-8e98857a6550 00:18:45.739 [2024-11-27 19:16:55.353587] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.739 [2024-11-27 19:16:55.353596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:45.739 [2024-11-27 19:16:55.353602] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.739 [2024-11-27 19:16:55.353610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.739 [2024-11-27 19:16:55.353615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.739 [2024-11-27 19:16:55.353622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.739 [2024-11-27 19:16:55.353628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.739 [2024-11-27 19:16:55.353634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.739 [2024-11-27 19:16:55.353638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.739 [2024-11-27 19:16:55.353645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.739 [2024-11-27 19:16:55.353652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.739 [2024-11-27 19:16:55.353660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:18:45.739 [2024-11-27 19:16:55.353665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.739 [2024-11-27 19:16:55.363734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.739 [2024-11-27 19:16:55.363758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.739 [2024-11-27 19:16:55.363768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.031 ms 00:18:45.739 [2024-11-27 19:16:55.363774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.739 [2024-11-27 19:16:55.364066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.739 [2024-11-27 19:16:55.364074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.739 [2024-11-27 19:16:55.364082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:45.739 [2024-11-27 19:16:55.364087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.400677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.400703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.998 [2024-11-27 19:16:55.400714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.400720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:45.998 [2024-11-27 19:16:55.400779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.400785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.998 [2024-11-27 19:16:55.400793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.400799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.400873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.400884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.998 [2024-11-27 19:16:55.400893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.400899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.400923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.400930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.998 [2024-11-27 19:16:55.400938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.400944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.467947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.468103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.998 [2024-11-27 19:16:55.468120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.468140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.519594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.519626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.998 [2024-11-27 19:16:55.519636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.519643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.519735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.519743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.998 [2024-11-27 19:16:55.519754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.519760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.519814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.519823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.998 [2024-11-27 19:16:55.519831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.519837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.519930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.519939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.998 [2024-11-27 19:16:55.519949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 
19:16:55.519955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.520003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.520010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.998 [2024-11-27 19:16:55.520019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.520025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.520070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.520077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.998 [2024-11-27 19:16:55.520084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.520091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.520157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.998 [2024-11-27 19:16:55.520166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.998 [2024-11-27 19:16:55.520174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.998 [2024-11-27 19:16:55.520181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.998 [2024-11-27 19:16:55.520342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.867 ms, result 0 00:18:45.998 true 00:18:45.998 19:16:55 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75204 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75204 ']' 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75204 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75204 00:18:45.999 killing process with pid 75204 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75204' 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75204 00:18:45.999 19:16:55 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75204 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.586 19:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:52.586 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:52.586 fio-3.35 00:18:52.586 Starting 1 thread 00:18:56.844 00:18:56.844 test: (groupid=0, jobs=1): err= 0: pid=75383: Wed Nov 27 19:17:06 2024 00:18:56.844 read: IOPS=1096, BW=72.8MiB/s (76.3MB/s)(255MiB/3496msec) 00:18:56.844 slat (nsec): min=4209, max=26418, avg=6133.61, stdev=2665.61 00:18:56.844 clat (usec): min=261, max=1344, avg=413.65, stdev=147.64 00:18:56.844 lat (usec): min=266, max=1354, avg=419.78, stdev=149.28 00:18:56.844 clat percentiles (usec): 00:18:56.844 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:18:56.844 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 371], 00:18:56.844 | 70.00th=[ 412], 80.00th=[ 494], 90.00th=[ 562], 95.00th=[ 857], 00:18:56.844 | 99.00th=[ 963], 99.50th=[ 1074], 99.90th=[ 1188], 99.95th=[ 1205], 00:18:56.844 | 99.99th=[ 1352] 00:18:56.844 write: IOPS=1103, BW=73.3MiB/s (76.9MB/s)(256MiB/3493msec); 0 zone resets 00:18:56.844 slat (usec): min=14, max=116, avg=20.11, stdev= 4.42 00:18:56.844 clat (usec): min=305, max=1737, avg=455.78, stdev=165.42 00:18:56.844 lat (usec): min=326, max=1757, avg=475.89, stdev=167.64 00:18:56.844 clat percentiles (usec): 00:18:56.844 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 355], 00:18:56.844 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 412], 00:18:56.844 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 652], 95.00th=[ 824], 00:18:56.844 | 99.00th=[ 1037], 99.50th=[ 1172], 99.90th=[ 1647], 99.95th=[ 1729], 00:18:56.844 | 99.99th=[ 1745] 00:18:56.844 bw ( KiB/s): min=52496, max=87312, per=97.38%, avg=73100.00, stdev=15528.97, samples=6 00:18:56.844 iops : min= 772, max= 1284, avg=1075.00, stdev=228.37, samples=6 00:18:56.844 lat (usec) : 500=78.64%, 750=15.88%, 1000=4.37% 
00:18:56.844 lat (msec) : 2=1.11% 00:18:56.844 cpu : usr=99.23%, sys=0.09%, ctx=5, majf=0, minf=1169 00:18:56.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:56.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.844 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:56.844 00:18:56.844 Run status group 0 (all jobs): 00:18:56.844 READ: bw=72.8MiB/s (76.3MB/s), 72.8MiB/s-72.8MiB/s (76.3MB/s-76.3MB/s), io=255MiB (267MB), run=3496-3496msec 00:18:56.844 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=256MiB (269MB), run=3493-3493msec 00:18:58.230 ----------------------------------------------------- 00:18:58.230 Suppressions used: 00:18:58.230 count bytes template 00:18:58.230 1 5 /usr/src/fio/parse.c 00:18:58.230 1 8 libtcmalloc_minimal.so 00:18:58.230 1 904 libcrypto.so 00:18:58.230 ----------------------------------------------------- 00:18:58.230 00:18:58.230 19:17:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:58.230 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.230 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:58.492 19:17:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:58.492 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:58.492 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:58.492 fio-3.35 00:18:58.492 Starting 2 threads 00:19:25.052 00:19:25.052 first_half: (groupid=0, jobs=1): err= 0: pid=75486: Wed Nov 27 19:17:32 2024 00:19:25.052 read: IOPS=2862, BW=11.2MiB/s (11.7MB/s)(256MiB/22877msec) 00:19:25.052 slat (usec): min=2, max=1161, avg= 4.61, stdev= 5.64 00:19:25.052 clat (usec): min=1332, max=326280, avg=37472.01, stdev=26855.24 00:19:25.052 lat (usec): min=1344, max=326285, avg=37476.63, stdev=26855.55 00:19:25.052 clat percentiles (msec): 00:19:25.052 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 30], 00:19:25.052 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:25.052 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 45], 95.00th=[ 78], 00:19:25.052 | 99.00th=[ 174], 99.50th=[ 215], 99.90th=[ 275], 99.95th=[ 296], 00:19:25.052 | 99.99th=[ 321] 00:19:25.052 write: IOPS=2870, BW=11.2MiB/s (11.8MB/s)(256MiB/22830msec); 0 zone resets 00:19:25.052 slat (usec): min=3, max=951, avg= 6.07, stdev= 6.79 00:19:25.052 clat (usec): min=335, max=68128, avg=7212.09, stdev=6993.75 00:19:25.052 lat (usec): min=355, max=68137, avg=7218.16, stdev=6993.87 00:19:25.052 clat percentiles (usec): 00:19:25.052 | 1.00th=[ 734], 5.00th=[ 898], 10.00th=[ 1663], 20.00th=[ 3097], 00:19:25.052 | 30.00th=[ 4293], 40.00th=[ 4948], 50.00th=[ 5538], 60.00th=[ 5997], 00:19:25.052 | 70.00th=[ 7046], 80.00th=[ 9503], 90.00th=[13042], 95.00th=[21627], 00:19:25.052 | 99.00th=[37487], 99.50th=[45351], 99.90th=[55313], 99.95th=[62129], 00:19:25.052 | 99.99th=[66847] 00:19:25.052 bw ( KiB/s): min= 1416, max=42056, per=100.00%, avg=24797.90, stdev=13346.95, samples=21 00:19:25.052 iops : min= 354, max=10514, avg=6199.48, stdev=3336.74, samples=21 00:19:25.052 lat (usec) : 500=0.03%, 750=0.63%, 1000=2.42% 00:19:25.052 lat (msec) : 2=2.90%, 4=7.66%, 10=27.28%, 20=8.10%, 50=46.92% 00:19:25.052 lat (msec) : 100=2.13%, 250=1.88%, 500=0.07% 00:19:25.052 cpu : usr=98.68%, sys=0.30%, ctx=162, majf=0, minf=5528 00:19:25.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:25.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.052 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.052 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.052 second_half: (groupid=0, jobs=1): err= 0: pid=75487: Wed Nov 27 19:17:32 2024 00:19:25.052 read: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(256MiB/22591msec) 00:19:25.052 slat (nsec): min=3188, max=44788, avg=5332.75, stdev=1478.14 00:19:25.052 clat (msec): min=9, max=289, avg=37.82, stdev=25.65 00:19:25.052 lat (msec): min=9, max=289, avg=37.82, stdev=25.65 00:19:25.052 clat percentiles (msec): 00:19:25.052 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:19:25.052 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:25.052 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 45], 95.00th=[ 75], 
00:19:25.052 | 99.00th=[ 169], 99.50th=[ 209], 99.90th=[ 264], 99.95th=[ 271], 00:19:25.052 | 99.99th=[ 284] 00:19:25.052 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(256MiB/22473msec); 0 zone resets 00:19:25.052 slat (usec): min=3, max=609, avg= 6.52, stdev= 5.45 00:19:25.052 clat (usec): min=337, max=49792, avg=6313.99, stdev=4110.42 00:19:25.052 lat (usec): min=346, max=49797, avg=6320.51, stdev=4110.60 00:19:25.052 clat percentiles (usec): 00:19:25.052 | 1.00th=[ 881], 5.00th=[ 1909], 10.00th=[ 2474], 20.00th=[ 3326], 00:19:25.052 | 30.00th=[ 4047], 40.00th=[ 4817], 50.00th=[ 5407], 60.00th=[ 5866], 00:19:25.052 | 70.00th=[ 6980], 80.00th=[ 8979], 90.00th=[11863], 95.00th=[13566], 00:19:25.052 | 99.00th=[17695], 99.50th=[23987], 99.90th=[43779], 99.95th=[46400], 00:19:25.052 | 99.99th=[49021] 00:19:25.052 bw ( KiB/s): min= 2064, max=38416, per=91.32%, avg=20971.52, stdev=12416.00, samples=25 00:19:25.052 iops : min= 516, max= 9604, avg=5242.80, stdev=3104.02, samples=25 00:19:25.052 lat (usec) : 500=0.03%, 750=0.23%, 1000=0.55% 00:19:25.052 lat (msec) : 2=1.91%, 4=12.08%, 10=26.87%, 20=8.07%, 50=46.16% 00:19:25.052 lat (msec) : 100=2.33%, 250=1.67%, 500=0.10% 00:19:25.052 cpu : usr=99.22%, sys=0.15%, ctx=41, majf=0, minf=5587 00:19:25.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:25.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.052 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.052 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.052 00:19:25.052 Run status group 0 (all jobs): 00:19:25.052 READ: bw=22.4MiB/s (23.4MB/s), 11.2MiB/s-11.3MiB/s (11.7MB/s-11.9MB/s), io=512MiB (536MB), run=22591-22877msec 00:19:25.052 WRITE: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.4MiB/s (11.8MB/s-11.9MB/s), io=512MiB (537MB), run=22473-22830msec 00:19:25.314 ----------------------------------------------------- 00:19:25.314 Suppressions used: 00:19:25.314 count bytes template 00:19:25.314 2 10 /usr/src/fio/parse.c 00:19:25.314 4 384 /usr/src/fio/iolog.c 00:19:25.314 1 8 libtcmalloc_minimal.so 00:19:25.314 1 904 libcrypto.so 00:19:25.314 ----------------------------------------------------- 00:19:25.314 00:19:25.314 19:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:25.314 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.314 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:25.576 
19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:25.576 19:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:25.576 19:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:25.576 19:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:25.576 19:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:25.577 19:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:25.577 19:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:25.577 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:25.577 fio-3.35 00:19:25.577 Starting 1 thread 00:19:43.710 00:19:43.710 test: (groupid=0, jobs=1): err= 0: pid=75789: Wed Nov 27 19:17:51 2024 00:19:43.710 read: IOPS=7268, BW=28.4MiB/s (29.8MB/s)(255MiB/8971msec) 00:19:43.710 slat (nsec): min=3111, max=22524, avg=4893.23, stdev=1219.35 00:19:43.710 clat (usec): min=554, max=40207, avg=17602.22, stdev=2297.31 00:19:43.710 lat (usec): min=558, max=40213, avg=17607.11, stdev=2297.31 00:19:43.710 clat percentiles (usec): 00:19:43.710 | 1.00th=[14877], 5.00th=[15401], 10.00th=[15664], 20.00th=[15926], 00:19:43.710 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:19:43.710 | 70.00th=[18220], 80.00th=[19268], 90.00th=[20841], 95.00th=[21890], 00:19:43.710 | 99.00th=[25035], 99.50th=[26608], 99.90th=[31065], 99.95th=[34866], 00:19:43.710 | 99.99th=[40109] 00:19:43.710 write: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(256MiB/6328msec); 0 zone resets 00:19:43.710 slat (usec): min=4, max=273, avg= 7.58, stdev= 3.95 00:19:43.710 clat (usec): min=494, max=56660, avg=12310.83, stdev=13409.81 00:19:43.710 lat (usec): min=500, max=56668, avg=12318.41, stdev=13409.98 00:19:43.710 clat percentiles (usec): 00:19:43.710 | 1.00th=[ 742], 5.00th=[ 971], 10.00th=[ 1123], 20.00th=[ 1336], 00:19:43.710 | 30.00th=[ 1631], 40.00th=[ 2311], 50.00th=[ 9110], 60.00th=[11469], 00:19:43.710 | 70.00th=[15139], 80.00th=[17695], 90.00th=[36963], 95.00th=[42206], 00:19:43.710 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52691], 99.95th=[53216], 00:19:43.710 | 99.99th=[55313] 00:19:43.710 bw ( KiB/s): min=28904, max=52960, per=97.35%, avg=40329.85, stdev=7865.79, samples=13 00:19:43.710 iops : min= 7226, max=13240, avg=10082.46, stdev=1966.45, samples=13 00:19:43.710 lat (usec) : 500=0.01%, 750=0.54%, 1000=2.44% 00:19:43.710 lat (msec) : 2=15.41%, 4=2.63%, 10=6.19%, 20=57.14%, 50=15.22% 00:19:43.710 lat (msec) : 100=0.42% 00:19:43.710 cpu : usr=98.99%, sys=0.18%, ctx=179, 
majf=0, minf=5565 00:19:43.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:43.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.710 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:43.710 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:43.710 00:19:43.710 Run status group 0 (all jobs): 00:19:43.710 READ: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=255MiB (267MB), run=8971-8971msec 00:19:43.710 WRITE: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=256MiB (268MB), run=6328-6328msec 00:19:43.972 ----------------------------------------------------- 00:19:43.972 Suppressions used: 00:19:43.972 count bytes template 00:19:43.972 1 5 /usr/src/fio/parse.c 00:19:43.972 2 192 /usr/src/fio/iolog.c 00:19:43.972 1 8 libtcmalloc_minimal.so 00:19:43.972 1 904 libcrypto.so 00:19:43.972 ----------------------------------------------------- 00:19:43.972 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.972 Remove shared memory files 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57171 /dev/shm/spdk_tgt_trace.pid74131 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:43.972 ************************************ 00:19:43.972 END TEST ftl_fio_basic 00:19:43.972 ************************************ 00:19:43.972 00:19:43.972 real 1m5.811s 00:19:43.972 user 2m17.613s 00:19:43.972 sys 0m3.001s 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.972 19:17:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:44.235 19:17:53 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:44.235 19:17:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:44.235 19:17:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.235 19:17:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:44.235 ************************************ 00:19:44.235 START TEST ftl_bdevperf 00:19:44.235 ************************************ 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:44.235 * Looking for test storage... 
00:19:44.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.235 --rc genhtml_branch_coverage=1 00:19:44.235 --rc genhtml_function_coverage=1 00:19:44.235 --rc genhtml_legend=1 00:19:44.235 --rc geninfo_all_blocks=1 00:19:44.235 --rc geninfo_unexecuted_blocks=1 00:19:44.235 00:19:44.235 ' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.235 --rc genhtml_branch_coverage=1 00:19:44.235 
--rc genhtml_function_coverage=1 00:19:44.235 --rc genhtml_legend=1 00:19:44.235 --rc geninfo_all_blocks=1 00:19:44.235 --rc geninfo_unexecuted_blocks=1 00:19:44.235 00:19:44.235 ' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.235 --rc genhtml_branch_coverage=1 00:19:44.235 --rc genhtml_function_coverage=1 00:19:44.235 --rc genhtml_legend=1 00:19:44.235 --rc geninfo_all_blocks=1 00:19:44.235 --rc geninfo_unexecuted_blocks=1 00:19:44.235 00:19:44.235 ' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.235 --rc genhtml_branch_coverage=1 00:19:44.235 --rc genhtml_function_coverage=1 00:19:44.235 --rc genhtml_legend=1 00:19:44.235 --rc geninfo_all_blocks=1 00:19:44.235 --rc geninfo_unexecuted_blocks=1 00:19:44.235 00:19:44.235 ' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.235 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76051 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76051 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76051 ']' 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.236 19:17:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:44.498 [2024-11-27 19:17:53.887731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
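
[Editorial sketch, not part of the captured output] The trace above shows the harness starting bdevperf suspended (-z) and then waiting on its RPC socket before wiring up the FTL target, so the device stack is built over JSON-RPC before any I/O runs. A minimal sketch of that pattern, with paths and RPC calls copied from the trace; the polling loop is an assumed stand-in for the real waitforlisten helper in autotest_common.sh, which is more defensive (pid checks, timeouts):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Start bdevperf idle; -z makes it wait for RPC-driven configuration
  # (-T ftl0 as in the trace above).
  "$bdevperf" -z -T ftl0 &
  bdevperf_pid=$!
  trap 'kill "$bdevperf_pid"' SIGINT SIGTERM EXIT
  # Poll until the default RPC socket (/var/tmp/spdk.sock) answers.
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # Attach the base NVMe controller, exactly as the trace that follows does.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
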
00:19:44.498 [2024-11-27 19:17:53.888261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:19:44.498 [2024-11-27 19:17:54.054592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.760 [2024-11-27 19:17:54.173757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:45.347 19:17:54 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:45.608 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:45.869 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.869 { 00:19:45.869 "name": "nvme0n1", 00:19:45.869 "aliases": [ 00:19:45.869 "8ba88e74-671f-4cd9-b557-ed841a577777" 00:19:45.869 ], 00:19:45.869 "product_name": "NVMe disk", 00:19:45.869 "block_size": 4096, 00:19:45.869 "num_blocks": 1310720, 00:19:45.869 "uuid": "8ba88e74-671f-4cd9-b557-ed841a577777", 00:19:45.869 "numa_id": -1, 00:19:45.869 "assigned_rate_limits": { 00:19:45.869 "rw_ios_per_sec": 0, 00:19:45.869 "rw_mbytes_per_sec": 0, 00:19:45.869 "r_mbytes_per_sec": 0, 00:19:45.869 "w_mbytes_per_sec": 0 00:19:45.869 }, 00:19:45.869 "claimed": true, 00:19:45.869 "claim_type": "read_many_write_one", 00:19:45.869 "zoned": false, 00:19:45.869 "supported_io_types": { 00:19:45.869 "read": true, 00:19:45.869 "write": true, 00:19:45.869 "unmap": true, 00:19:45.869 "flush": true, 00:19:45.869 "reset": true, 00:19:45.869 "nvme_admin": true, 00:19:45.869 "nvme_io": true, 00:19:45.869 "nvme_io_md": false, 00:19:45.869 "write_zeroes": true, 00:19:45.869 "zcopy": false, 00:19:45.869 "get_zone_info": false, 00:19:45.869 "zone_management": false, 00:19:45.869 "zone_append": false, 00:19:45.869 "compare": true, 00:19:45.869 "compare_and_write": false, 00:19:45.869 "abort": true, 00:19:45.869 "seek_hole": false, 00:19:45.869 "seek_data": false, 00:19:45.869 "copy": true, 00:19:45.869 "nvme_iov_md": false 00:19:45.869 }, 00:19:45.869 "driver_specific": { 00:19:45.869 
"nvme": [ 00:19:45.869 { 00:19:45.869 "pci_address": "0000:00:11.0", 00:19:45.869 "trid": { 00:19:45.869 "trtype": "PCIe", 00:19:45.869 "traddr": "0000:00:11.0" 00:19:45.869 }, 00:19:45.869 "ctrlr_data": { 00:19:45.869 "cntlid": 0, 00:19:45.869 "vendor_id": "0x1b36", 00:19:45.869 "model_number": "QEMU NVMe Ctrl", 00:19:45.869 "serial_number": "12341", 00:19:45.869 "firmware_revision": "8.0.0", 00:19:45.869 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:45.869 "oacs": { 00:19:45.869 "security": 0, 00:19:45.869 "format": 1, 00:19:45.869 "firmware": 0, 00:19:45.869 "ns_manage": 1 00:19:45.869 }, 00:19:45.869 "multi_ctrlr": false, 00:19:45.869 "ana_reporting": false 00:19:45.869 }, 00:19:45.869 "vs": { 00:19:45.869 "nvme_version": "1.4" 00:19:45.869 }, 00:19:45.869 "ns_data": { 00:19:45.869 "id": 1, 00:19:45.869 "can_share": false 00:19:45.869 } 00:19:45.869 } 00:19:45.869 ], 00:19:45.869 "mp_policy": "active_passive" 00:19:45.869 } 00:19:45.869 } 00:19:45.869 ]' 00:19:45.869 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.869 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:45.870 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:46.131 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=2a70052e-34ba-4c2e-9684-5bc6e1faef71 00:19:46.131 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:46.131 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a70052e-34ba-4c2e-9684-5bc6e1faef71 00:19:46.392 19:17:55 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:46.392 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=55032bee-1879-4b2e-a300-35ff347395ce 00:19:46.392 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 55032bee-1879-4b2e-a300-35ff347395ce 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.652 19:17:56 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:46.652 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:46.911 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.911 { 00:19:46.911 "name": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:46.911 "aliases": [ 00:19:46.911 "lvs/nvme0n1p0" 00:19:46.911 ], 00:19:46.911 "product_name": "Logical Volume", 00:19:46.911 "block_size": 4096, 00:19:46.911 "num_blocks": 26476544, 00:19:46.911 "uuid": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:46.911 "assigned_rate_limits": { 00:19:46.911 "rw_ios_per_sec": 0, 00:19:46.911 "rw_mbytes_per_sec": 0, 00:19:46.912 "r_mbytes_per_sec": 0, 00:19:46.912 "w_mbytes_per_sec": 0 00:19:46.912 }, 00:19:46.912 "claimed": false, 00:19:46.912 "zoned": false, 00:19:46.912 "supported_io_types": { 00:19:46.912 "read": true, 00:19:46.912 "write": true, 00:19:46.912 "unmap": true, 00:19:46.912 "flush": false, 00:19:46.912 "reset": true, 00:19:46.912 "nvme_admin": false, 00:19:46.912 "nvme_io": false, 00:19:46.912 "nvme_io_md": false, 00:19:46.912 "write_zeroes": true, 00:19:46.912 "zcopy": false, 00:19:46.912 "get_zone_info": false, 00:19:46.912 "zone_management": false, 00:19:46.912 "zone_append": false, 00:19:46.912 "compare": false, 00:19:46.912 "compare_and_write": false, 00:19:46.912 "abort": false, 00:19:46.912 "seek_hole": true, 00:19:46.912 "seek_data": true, 00:19:46.912 "copy": false, 00:19:46.912 "nvme_iov_md": false 00:19:46.912 }, 00:19:46.912 "driver_specific": { 00:19:46.912 "lvol": { 00:19:46.912 "lvol_store_uuid": "55032bee-1879-4b2e-a300-35ff347395ce", 00:19:46.912 "base_bdev": "nvme0n1", 00:19:46.912 "thin_provision": true, 00:19:46.912 "num_allocated_clusters": 0, 00:19:46.912 "snapshot": false, 00:19:46.912 "clone": false, 00:19:46.912 "esnap_clone": false 00:19:46.912 } 00:19:46.912 } 00:19:46.912 } 00:19:46.912 ]' 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:46.912 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:47.170 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.429 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:47.429 { 00:19:47.429 "name": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:47.429 "aliases": [ 00:19:47.429 "lvs/nvme0n1p0" 00:19:47.429 ], 00:19:47.429 "product_name": "Logical Volume", 00:19:47.429 "block_size": 4096, 00:19:47.429 "num_blocks": 26476544, 00:19:47.429 "uuid": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:47.429 "assigned_rate_limits": { 00:19:47.429 "rw_ios_per_sec": 0, 00:19:47.429 "rw_mbytes_per_sec": 0, 00:19:47.429 "r_mbytes_per_sec": 0, 00:19:47.429 "w_mbytes_per_sec": 0 00:19:47.429 }, 00:19:47.429 "claimed": false, 00:19:47.429 "zoned": false, 00:19:47.429 "supported_io_types": { 00:19:47.429 "read": true, 00:19:47.429 "write": true, 00:19:47.429 "unmap": true, 00:19:47.429 "flush": false, 00:19:47.429 "reset": true, 00:19:47.429 "nvme_admin": false, 00:19:47.429 "nvme_io": false, 00:19:47.429 "nvme_io_md": false, 00:19:47.429 "write_zeroes": true, 00:19:47.429 "zcopy": false, 00:19:47.429 "get_zone_info": false, 00:19:47.429 "zone_management": false, 00:19:47.429 "zone_append": false, 00:19:47.429 "compare": false, 00:19:47.429 "compare_and_write": false, 00:19:47.429 "abort": false, 00:19:47.429 "seek_hole": true, 00:19:47.429 "seek_data": true, 00:19:47.429 "copy": false, 00:19:47.429 "nvme_iov_md": false 00:19:47.429 }, 00:19:47.429 "driver_specific": { 00:19:47.429 "lvol": { 00:19:47.429 "lvol_store_uuid": "55032bee-1879-4b2e-a300-35ff347395ce", 00:19:47.429 "base_bdev": "nvme0n1", 00:19:47.429 "thin_provision": true, 00:19:47.429 "num_allocated_clusters": 0, 00:19:47.429 "snapshot": false, 00:19:47.429 "clone": false, 00:19:47.429 "esnap_clone": false 00:19:47.429 } 00:19:47.429 } 00:19:47.429 } 00:19:47.429 ]' 00:19:47.429 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:47.429 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:47.429 19:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:47.429 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:47.429 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:47.429 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:47.429 19:17:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:47.429 19:17:57 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:47.687 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81b9463d-9bd2-4a09-b855-e7a577dc3456 00:19:47.944 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:47.944 { 00:19:47.944 "name": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:47.944 "aliases": [ 00:19:47.944 "lvs/nvme0n1p0" 00:19:47.944 ], 00:19:47.944 "product_name": "Logical Volume", 00:19:47.944 "block_size": 4096, 00:19:47.944 "num_blocks": 26476544, 00:19:47.944 "uuid": "81b9463d-9bd2-4a09-b855-e7a577dc3456", 00:19:47.944 "assigned_rate_limits": { 00:19:47.944 "rw_ios_per_sec": 0, 00:19:47.944 "rw_mbytes_per_sec": 0, 00:19:47.944 "r_mbytes_per_sec": 0, 00:19:47.944 "w_mbytes_per_sec": 0 00:19:47.944 }, 00:19:47.944 "claimed": false, 00:19:47.944 "zoned": false, 00:19:47.945 "supported_io_types": { 00:19:47.945 "read": true, 00:19:47.945 "write": true, 00:19:47.945 "unmap": true, 00:19:47.945 "flush": false, 00:19:47.945 "reset": true, 00:19:47.945 "nvme_admin": false, 00:19:47.945 "nvme_io": false, 00:19:47.945 "nvme_io_md": false, 00:19:47.945 "write_zeroes": true, 00:19:47.945 "zcopy": false, 00:19:47.945 "get_zone_info": false, 00:19:47.945 "zone_management": false, 00:19:47.945 "zone_append": false, 00:19:47.945 "compare": false, 00:19:47.945 "compare_and_write": false, 00:19:47.945 "abort": false, 00:19:47.945 "seek_hole": true, 00:19:47.945 "seek_data": true, 00:19:47.945 "copy": false, 00:19:47.945 "nvme_iov_md": false 00:19:47.945 }, 00:19:47.945 "driver_specific": { 00:19:47.945 "lvol": { 00:19:47.945 "lvol_store_uuid": "55032bee-1879-4b2e-a300-35ff347395ce", 00:19:47.945 "base_bdev": "nvme0n1", 00:19:47.945 "thin_provision": true, 00:19:47.945 "num_allocated_clusters": 0, 00:19:47.945 "snapshot": false, 00:19:47.945 "clone": false, 00:19:47.945 "esnap_clone": false 00:19:47.945 } 00:19:47.945 } 00:19:47.945 } 00:19:47.945 ]' 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:47.945 19:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 81b9463d-9bd2-4a09-b855-e7a577dc3456 -c nvc0n1p0 --l2p_dram_limit 20 00:19:48.207 [2024-11-27 19:17:57.662115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.662168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.207 [2024-11-27 19:17:57.662180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.207 [2024-11-27 19:17:57.662190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.662232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.662243] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.207 [2024-11-27 19:17:57.662249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:48.207 [2024-11-27 19:17:57.662257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.662270] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.207 [2024-11-27 19:17:57.662843] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.207 [2024-11-27 19:17:57.662858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.662867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.207 [2024-11-27 19:17:57.662875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:19:48.207 [2024-11-27 19:17:57.662883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.662931] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f393fd20-db8b-4346-bcfc-63929e7e7c28 00:19:48.207 [2024-11-27 19:17:57.664200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.664227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:48.207 [2024-11-27 19:17:57.664240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:48.207 [2024-11-27 19:17:57.664247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.671100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.671136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.207 [2024-11-27 19:17:57.671147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.819 ms 00:19:48.207 [2024-11-27 19:17:57.671154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.671235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.671244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.207 [2024-11-27 19:17:57.671255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:48.207 [2024-11-27 19:17:57.671261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.671294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.671303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.207 [2024-11-27 19:17:57.671310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.207 [2024-11-27 19:17:57.671316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.671333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.207 [2024-11-27 19:17:57.674569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.674700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.207 [2024-11-27 19:17:57.674713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:19:48.207 [2024-11-27 19:17:57.674723] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.674752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.674760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.207 [2024-11-27 19:17:57.674767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:48.207 [2024-11-27 19:17:57.674774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.674791] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:48.207 [2024-11-27 19:17:57.674984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.207 [2024-11-27 19:17:57.674995] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.207 [2024-11-27 19:17:57.675006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:48.207 [2024-11-27 19:17:57.675015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675025] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:48.207 [2024-11-27 19:17:57.675039] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.207 [2024-11-27 19:17:57.675045] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.207 [2024-11-27 19:17:57.675052] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.207 [2024-11-27 19:17:57.675062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.675069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.207 [2024-11-27 19:17:57.675076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:19:48.207 [2024-11-27 19:17:57.675083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.675160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.207 [2024-11-27 19:17:57.675170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.207 [2024-11-27 19:17:57.675176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:48.207 [2024-11-27 19:17:57.675185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.207 [2024-11-27 19:17:57.675267] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.207 [2024-11-27 19:17:57.675279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.207 [2024-11-27 19:17:57.675286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.207 [2024-11-27 19:17:57.675307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:48.207 
[2024-11-27 19:17:57.675319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.207 [2024-11-27 19:17:57.675325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.207 [2024-11-27 19:17:57.675338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.207 [2024-11-27 19:17:57.675350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:48.207 [2024-11-27 19:17:57.675356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.207 [2024-11-27 19:17:57.675362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.207 [2024-11-27 19:17:57.675367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:48.207 [2024-11-27 19:17:57.675376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.207 [2024-11-27 19:17:57.675392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.207 [2024-11-27 19:17:57.675409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.207 [2024-11-27 19:17:57.675427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.207 [2024-11-27 19:17:57.675445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.207 [2024-11-27 19:17:57.675463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.207 [2024-11-27 19:17:57.675475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.207 [2024-11-27 19:17:57.675482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:48.207 [2024-11-27 19:17:57.675488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.207 [2024-11-27 19:17:57.675493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.207 [2024-11-27 19:17:57.675499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:48.207 [2024-11-27 19:17:57.675504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.207 [2024-11-27 19:17:57.675510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.207 [2024-11-27 19:17:57.675515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:48.207 [2024-11-27 19:17:57.675522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.208 [2024-11-27 19:17:57.675528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.208 [2024-11-27 19:17:57.675537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:48.208 [2024-11-27 19:17:57.675542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.208 [2024-11-27 19:17:57.675548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.208 [2024-11-27 19:17:57.675554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.208 [2024-11-27 19:17:57.675561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.208 [2024-11-27 19:17:57.675567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.208 [2024-11-27 19:17:57.675576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.208 [2024-11-27 19:17:57.675583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.208 [2024-11-27 19:17:57.675591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.208 [2024-11-27 19:17:57.675597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.208 [2024-11-27 19:17:57.675603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.208 [2024-11-27 19:17:57.675608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.208 [2024-11-27 19:17:57.675617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.208 [2024-11-27 19:17:57.675625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:48.208 [2024-11-27 19:17:57.675639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:48.208 [2024-11-27 19:17:57.675646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:48.208 [2024-11-27 19:17:57.675652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:48.208 [2024-11-27 19:17:57.675659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:48.208 [2024-11-27 19:17:57.675664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:48.208 [2024-11-27 19:17:57.675672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:48.208 [2024-11-27 19:17:57.675677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:48.208 [2024-11-27 19:17:57.675685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:48.208 [2024-11-27 19:17:57.675691] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:48.208 [2024-11-27 19:17:57.675723] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.208 [2024-11-27 19:17:57.675729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.208 [2024-11-27 19:17:57.675744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.208 [2024-11-27 19:17:57.675751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.208 [2024-11-27 19:17:57.675757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.208 [2024-11-27 19:17:57.675765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.208 [2024-11-27 19:17:57.675770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.208 [2024-11-27 19:17:57.675777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:19:48.208 [2024-11-27 19:17:57.675782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.208 [2024-11-27 19:17:57.675810] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:48.208 [2024-11-27 19:17:57.675818] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:52.419 [2024-11-27 19:18:01.737754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.737858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:52.419 [2024-11-27 19:18:01.737883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4061.921 ms 00:19:52.419 [2024-11-27 19:18:01.737895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.775024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.775091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.419 [2024-11-27 19:18:01.775110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.843 ms 00:19:52.419 [2024-11-27 19:18:01.775139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.775290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.775305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.419 [2024-11-27 19:18:01.775323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:52.419 [2024-11-27 19:18:01.775332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.826849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.827094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.419 [2024-11-27 19:18:01.827142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.474 ms 00:19:52.419 [2024-11-27 19:18:01.827153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.827211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.827222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.419 [2024-11-27 19:18:01.827238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.419 [2024-11-27 19:18:01.827246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.827991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.828041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.419 [2024-11-27 19:18:01.828056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:19:52.419 [2024-11-27 19:18:01.828065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.828212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.828222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.419 [2024-11-27 19:18:01.828237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:19:52.419 [2024-11-27 19:18:01.828251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.846749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.846795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.419 [2024-11-27 
19:18:01.846815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.469 ms 00:19:52.419 [2024-11-27 19:18:01.846833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.861647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:52.419 [2024-11-27 19:18:01.871147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.871193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.419 [2024-11-27 19:18:01.871205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.226 ms 00:19:52.419 [2024-11-27 19:18:01.871217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.971112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.971181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:52.419 [2024-11-27 19:18:01.971197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.865 ms 00:19:52.419 [2024-11-27 19:18:01.971209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.971428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.971453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.419 [2024-11-27 19:18:01.971467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:52.419 [2024-11-27 19:18:01.971480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:01.997601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:01.997658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:52.419 [2024-11-27 19:18:01.997672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:19:52.419 [2024-11-27 19:18:01.997684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:02.022668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:02.022719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:52.419 [2024-11-27 19:18:02.022733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.938 ms 00:19:52.419 [2024-11-27 19:18:02.022746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.419 [2024-11-27 19:18:02.023428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.419 [2024-11-27 19:18:02.023560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.419 [2024-11-27 19:18:02.023572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:19:52.419 [2024-11-27 19:18:02.023583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.680 [2024-11-27 19:18:02.117013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.680 [2024-11-27 19:18:02.117240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:52.680 [2024-11-27 19:18:02.117263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.388 ms 00:19:52.680 [2024-11-27 19:18:02.117275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.680 [2024-11-27 
19:18:02.146102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.680 [2024-11-27 19:18:02.146170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:52.680 [2024-11-27 19:18:02.146184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.742 ms 00:19:52.680 [2024-11-27 19:18:02.146196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.680 [2024-11-27 19:18:02.172167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.680 [2024-11-27 19:18:02.172451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:52.680 [2024-11-27 19:18:02.172472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.925 ms 00:19:52.680 [2024-11-27 19:18:02.172485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.680 [2024-11-27 19:18:02.198376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.680 [2024-11-27 19:18:02.198429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.680 [2024-11-27 19:18:02.198443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.852 ms 00:19:52.680 [2024-11-27 19:18:02.198454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.681 [2024-11-27 19:18:02.198506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.681 [2024-11-27 19:18:02.198524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.681 [2024-11-27 19:18:02.198534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:52.681 [2024-11-27 19:18:02.198545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.681 [2024-11-27 19:18:02.198666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.681 [2024-11-27 19:18:02.198682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.681 [2024-11-27 19:18:02.198692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:52.681 [2024-11-27 19:18:02.198707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.681 [2024-11-27 19:18:02.200064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4537.360 ms, result 0 00:19:52.681 { 00:19:52.681 "name": "ftl0", 00:19:52.681 "uuid": "f393fd20-db8b-4346-bcfc-63929e7e7c28" 00:19:52.681 } 00:19:52.681 19:18:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:52.681 19:18:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:52.681 19:18:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:52.940 19:18:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:52.940 [2024-11-27 19:18:02.539777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:52.940 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:52.940 Zero copy mechanism will not be used. 00:19:52.940 Running I/O for 4 seconds... 
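Note on the run above: the I/O size of 69632 bytes is 17 x 4096 B (68 KiB), just above the 65536 B (64 KiB) zero copy threshold, which is why the log reports that the zero copy mechanism will not be used. A minimal sketch of replaying the same queue-depth-1 workload by hand against the already-running bdevperf instance (same script path as used by this job; assumes the bdevperf RPC socket is still at its default location):
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632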
00:19:55.251 1114.00 IOPS, 73.98 MiB/s [2024-11-27T19:18:05.821Z] 1086.00 IOPS, 72.12 MiB/s [2024-11-27T19:18:06.757Z] 1166.67 IOPS, 77.47 MiB/s [2024-11-27T19:18:06.757Z] 1117.25 IOPS, 74.19 MiB/s 00:19:57.122 Latency(us) 00:19:57.122 [2024-11-27T19:18:06.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.122 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:57.122 ftl0 : 4.00 1117.10 74.18 0.00 0.00 936.80 168.57 2344.17 00:19:57.122 [2024-11-27T19:18:06.757Z] =================================================================================================================== 00:19:57.122 [2024-11-27T19:18:06.757Z] Total : 1117.10 74.18 0.00 0.00 936.80 168.57 2344.17 00:19:57.122 [2024-11-27 19:18:06.548091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:57.122 { 00:19:57.122 "results": [ 00:19:57.122 { 00:19:57.122 "job": "ftl0", 00:19:57.122 "core_mask": "0x1", 00:19:57.122 "workload": "randwrite", 00:19:57.122 "status": "finished", 00:19:57.122 "queue_depth": 1, 00:19:57.122 "io_size": 69632, 00:19:57.122 "runtime": 4.001419, 00:19:57.122 "iops": 1117.1037074597787, 00:19:57.122 "mibps": 74.18266807350092, 00:19:57.122 "io_failed": 0, 00:19:57.122 "io_timeout": 0, 00:19:57.122 "avg_latency_us": 936.7958533815178, 00:19:57.122 "min_latency_us": 168.56615384615384, 00:19:57.122 "max_latency_us": 2344.1723076923076 00:19:57.122 } 00:19:57.122 ], 00:19:57.122 "core_count": 1 00:19:57.122 } 00:19:57.122 19:18:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:57.122 [2024-11-27 19:18:06.653336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:57.122 Running I/O for 4 seconds... 
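Quick arithmetic check on the queue-depth-1 results above: 1117.10 IOPS x 69632 B is roughly 77.79 MB/s, or 74.18 MiB/s after dividing by 1048576, matching the MiB/s column in the table and the "mibps" field in the JSON results. Likewise, Little's law (average latency is approximately queue depth / IOPS) gives 1 / 1117.10 s, about 0.90 ms, consistent with the reported 936.80 us average.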
00:19:59.451 6902.00 IOPS, 26.96 MiB/s [2024-11-27T19:18:10.031Z] 5864.00 IOPS, 22.91 MiB/s [2024-11-27T19:18:10.972Z] 5435.00 IOPS, 21.23 MiB/s [2024-11-27T19:18:10.972Z] 5306.75 IOPS, 20.73 MiB/s 00:20:01.337 Latency(us) 00:20:01.337 [2024-11-27T19:18:10.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.337 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.337 ftl0 : 4.03 5298.20 20.70 0.00 0.00 24072.41 338.71 53235.40 00:20:01.337 [2024-11-27T19:18:10.972Z] =================================================================================================================== 00:20:01.337 [2024-11-27T19:18:10.972Z] Total : 5298.20 20.70 0.00 0.00 24072.41 0.00 53235.40 00:20:01.337 [2024-11-27 19:18:10.690605] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:01.337 { 00:20:01.337 "results": [ 00:20:01.337 { 00:20:01.337 "job": "ftl0", 00:20:01.337 "core_mask": "0x1", 00:20:01.337 "workload": "randwrite", 00:20:01.337 "status": "finished", 00:20:01.337 "queue_depth": 128, 00:20:01.337 "io_size": 4096, 00:20:01.337 "runtime": 4.028915, 00:20:01.337 "iops": 5298.200632180128, 00:20:01.337 "mibps": 20.696096219453626, 00:20:01.337 "io_failed": 0, 00:20:01.337 "io_timeout": 0, 00:20:01.337 "avg_latency_us": 24072.40619651313, 00:20:01.337 "min_latency_us": 338.7076923076923, 00:20:01.337 "max_latency_us": 53235.39692307692 00:20:01.337 } 00:20:01.337 ], 00:20:01.337 "core_count": 1 00:20:01.337 } 00:20:01.337 19:18:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:01.337 [2024-11-27 19:18:10.798951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:01.337 Running I/O for 4 seconds... 
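The same checks hold for the queue-depth-128 randwrite run above: 5298.20 IOPS x 4096 B is roughly 21.70 MB/s, or 20.70 MiB/s, matching the reported throughput, and Little's law gives 128 / 5298.20 s, about 24.2 ms, close to the reported 24072.41 us average latency. The jump in average latency from under 1 ms to about 24 ms is therefore expected queueing delay at the higher queue depth, not a regression.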
00:20:03.226 4371.00 IOPS, 17.07 MiB/s [2024-11-27T19:18:14.281Z] 4346.00 IOPS, 16.98 MiB/s [2024-11-27T19:18:14.881Z] 4359.00 IOPS, 17.03 MiB/s [2024-11-27T19:18:14.881Z] 4363.75 IOPS, 17.05 MiB/s 00:20:05.246 Latency(us) 00:20:05.246 [2024-11-27T19:18:14.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.246 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:05.246 Verification LBA range: start 0x0 length 0x1400000 00:20:05.246 ftl0 : 4.01 4381.16 17.11 0.00 0.00 29136.16 370.22 41741.39 00:20:05.246 [2024-11-27T19:18:14.881Z] =================================================================================================================== 00:20:05.246 [2024-11-27T19:18:14.881Z] Total : 4381.16 17.11 0.00 0.00 29136.16 0.00 41741.39 00:20:05.246 { 00:20:05.246 "results": [ 00:20:05.246 { 00:20:05.246 "job": "ftl0", 00:20:05.246 "core_mask": "0x1", 00:20:05.246 "workload": "verify", 00:20:05.246 "status": "finished", 00:20:05.246 "verify_range": { 00:20:05.246 "start": 0, 00:20:05.246 "length": 20971520 00:20:05.246 }, 00:20:05.246 "queue_depth": 128, 00:20:05.246 "io_size": 4096, 00:20:05.246 "runtime": 4.012863, 00:20:05.246 "iops": 4381.161280611873, 00:20:05.246 "mibps": 17.11391125239013, 00:20:05.246 "io_failed": 0, 00:20:05.246 "io_timeout": 0, 00:20:05.246 "avg_latency_us": 29136.159867689334, 00:20:05.246 "min_latency_us": 370.2153846153846, 00:20:05.246 "max_latency_us": 41741.39076923077 00:20:05.246 } 00:20:05.246 ], 00:20:05.246 "core_count": 1 00:20:05.246 } 00:20:05.246 [2024-11-27 19:18:14.833221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:05.246 19:18:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:05.507 [2024-11-27 19:18:15.049059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.507 [2024-11-27 19:18:15.049366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.507 [2024-11-27 19:18:15.049393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:05.507 [2024-11-27 19:18:15.049405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.507 [2024-11-27 19:18:15.049442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.507 [2024-11-27 19:18:15.052815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.507 [2024-11-27 19:18:15.053001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.507 [2024-11-27 19:18:15.053029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:20:05.507 [2024-11-27 19:18:15.053039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.507 [2024-11-27 19:18:15.055800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.507 [2024-11-27 19:18:15.055851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.507 [2024-11-27 19:18:15.055871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:20:05.508 [2024-11-27 19:18:15.055880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.279451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.279662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:05.769 [2024-11-27 19:18:15.279757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 223.543 ms 00:20:05.769 [2024-11-27 19:18:15.279771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.286087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.286151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:05.769 [2024-11-27 19:18:15.286173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:20:05.769 [2024-11-27 19:18:15.286182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.312226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.312276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.769 [2024-11-27 19:18:15.312292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.964 ms 00:20:05.769 [2024-11-27 19:18:15.312300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.331303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.331350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.769 [2024-11-27 19:18:15.331366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.946 ms 00:20:05.769 [2024-11-27 19:18:15.331375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.331543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.331557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.769 [2024-11-27 19:18:15.331574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:05.769 [2024-11-27 19:18:15.331583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.357897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.358096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:05.769 [2024-11-27 19:18:15.358144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.290 ms 00:20:05.769 [2024-11-27 19:18:15.358154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.769 [2024-11-27 19:18:15.383473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.769 [2024-11-27 19:18:15.383540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:05.769 [2024-11-27 19:18:15.383558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.056 ms 00:20:05.769 [2024-11-27 19:18:15.383567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.032 [2024-11-27 19:18:15.408100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.032 [2024-11-27 19:18:15.408160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.032 [2024-11-27 19:18:15.408175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.479 ms 00:20:06.032 [2024-11-27 19:18:15.408183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.032 [2024-11-27 19:18:15.432695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.032 [2024-11-27 19:18:15.432741] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.032 [2024-11-27 19:18:15.432759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.419 ms 00:20:06.032 [2024-11-27 19:18:15.432767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.032 [2024-11-27 19:18:15.432816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.032 [2024-11-27 19:18:15.432834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.032 [2024-11-27 19:18:15.432946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.432954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.432999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:06.033 [2024-11-27 19:18:15.433083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433841] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:06.033 [2024-11-27 19:18:15.433885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.034 [2024-11-27 19:18:15.433899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f393fd20-db8b-4346-bcfc-63929e7e7c28 00:20:06.034 [2024-11-27 19:18:15.433907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.034 [2024-11-27 19:18:15.433917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.034 [2024-11-27 19:18:15.433924] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.034 [2024-11-27 19:18:15.433934] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.034 [2024-11-27 19:18:15.433941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.034 [2024-11-27 19:18:15.433951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.034 [2024-11-27 19:18:15.433960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.034 [2024-11-27 19:18:15.433971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.034 [2024-11-27 19:18:15.433977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.034 [2024-11-27 19:18:15.433988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.034 [2024-11-27 19:18:15.434002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.034 [2024-11-27 19:18:15.434014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:20:06.034 [2024-11-27 19:18:15.434023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.448854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.034 [2024-11-27 19:18:15.448896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.034 [2024-11-27 19:18:15.448911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.785 ms 00:20:06.034 [2024-11-27 19:18:15.448919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.449392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.034 [2024-11-27 19:18:15.449421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.034 [2024-11-27 19:18:15.449434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:20:06.034 [2024-11-27 19:18:15.449445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.491621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.491860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.034 [2024-11-27 19:18:15.491891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.491901] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.491980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.491989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.034 [2024-11-27 19:18:15.492002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.492014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.492154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.492168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.034 [2024-11-27 19:18:15.492179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.492188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.492207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.492217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.034 [2024-11-27 19:18:15.492228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.492236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.585324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.585388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.034 [2024-11-27 19:18:15.585408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.585418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.660831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.660896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.034 [2024-11-27 19:18:15.660912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.660925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.034 [2024-11-27 19:18:15.661060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.661068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.034 [2024-11-27 19:18:15.661204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.661213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.034 [2024-11-27 19:18:15.661359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:06.034 [2024-11-27 19:18:15.661368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.034 [2024-11-27 19:18:15.661429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.661437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.034 [2024-11-27 19:18:15.661519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.661539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.034 [2024-11-27 19:18:15.661617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.034 [2024-11-27 19:18:15.661629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.034 [2024-11-27 19:18:15.661638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.034 [2024-11-27 19:18:15.661812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 612.687 ms, result 0 00:20:06.295 true 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76051 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76051 ']' 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76051 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76051 00:20:06.296 killing process with pid 76051 00:20:06.296 Received shutdown signal, test time was about 4.000000 seconds 00:20:06.296 00:20:06.296 Latency(us) 00:20:06.296 [2024-11-27T19:18:15.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.296 [2024-11-27T19:18:15.931Z] =================================================================================================================== 00:20:06.296 [2024-11-27T19:18:15.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76051' 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76051 00:20:06.296 19:18:15 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76051 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:11.586 Remove shared memory files 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:11.586 19:18:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:11.586 ************************************ 00:20:11.586 END TEST ftl_bdevperf 00:20:11.586 ************************************ 00:20:11.586 00:20:11.586 real 0m27.320s 00:20:11.586 user 0m29.745s 00:20:11.586 sys 0m1.135s 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.586 19:18:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:11.586 19:18:21 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:11.586 19:18:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:11.586 19:18:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.586 19:18:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:11.586 ************************************ 00:20:11.586 START TEST ftl_trim 00:20:11.586 ************************************ 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:11.586 * Looking for test storage... 00:20:11.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.586 19:18:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.586 --rc genhtml_branch_coverage=1 00:20:11.586 --rc genhtml_function_coverage=1 00:20:11.586 --rc genhtml_legend=1 00:20:11.586 --rc geninfo_all_blocks=1 00:20:11.586 --rc geninfo_unexecuted_blocks=1 00:20:11.586 00:20:11.586 ' 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.586 --rc genhtml_branch_coverage=1 00:20:11.586 --rc genhtml_function_coverage=1 00:20:11.586 --rc genhtml_legend=1 00:20:11.586 --rc geninfo_all_blocks=1 00:20:11.586 --rc geninfo_unexecuted_blocks=1 00:20:11.586 00:20:11.586 ' 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.586 --rc genhtml_branch_coverage=1 00:20:11.586 --rc genhtml_function_coverage=1 00:20:11.586 --rc genhtml_legend=1 00:20:11.586 --rc geninfo_all_blocks=1 00:20:11.586 --rc geninfo_unexecuted_blocks=1 00:20:11.586 00:20:11.586 ' 00:20:11.586 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.586 --rc genhtml_branch_coverage=1 00:20:11.586 --rc genhtml_function_coverage=1 00:20:11.586 --rc genhtml_legend=1 00:20:11.586 --rc geninfo_all_blocks=1 00:20:11.586 --rc geninfo_unexecuted_blocks=1 00:20:11.586 00:20:11.586 ' 00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:11.586 19:18:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.587 19:18:21 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76409 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:11.587 19:18:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76409 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76409 ']' 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.587 19:18:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:11.847 [2024-11-27 19:18:21.282783] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:11.847 [2024-11-27 19:18:21.283096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76409 ] 00:20:11.847 [2024-11-27 19:18:21.450725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:12.108 [2024-11-27 19:18:21.593037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.108 [2024-11-27 19:18:21.593368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.108 [2024-11-27 19:18:21.593407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.054 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.054 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:13.054 19:18:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:13.317 19:18:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:13.317 19:18:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:13.317 19:18:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:13.317 { 00:20:13.317 "name": "nvme0n1", 00:20:13.317 "aliases": [ 
00:20:13.317 "e9307367-0156-4cc4-bf4c-c13d7089f348" 00:20:13.317 ], 00:20:13.317 "product_name": "NVMe disk", 00:20:13.317 "block_size": 4096, 00:20:13.317 "num_blocks": 1310720, 00:20:13.317 "uuid": "e9307367-0156-4cc4-bf4c-c13d7089f348", 00:20:13.317 "numa_id": -1, 00:20:13.317 "assigned_rate_limits": { 00:20:13.317 "rw_ios_per_sec": 0, 00:20:13.317 "rw_mbytes_per_sec": 0, 00:20:13.317 "r_mbytes_per_sec": 0, 00:20:13.317 "w_mbytes_per_sec": 0 00:20:13.317 }, 00:20:13.317 "claimed": true, 00:20:13.317 "claim_type": "read_many_write_one", 00:20:13.317 "zoned": false, 00:20:13.317 "supported_io_types": { 00:20:13.317 "read": true, 00:20:13.317 "write": true, 00:20:13.317 "unmap": true, 00:20:13.317 "flush": true, 00:20:13.317 "reset": true, 00:20:13.317 "nvme_admin": true, 00:20:13.317 "nvme_io": true, 00:20:13.317 "nvme_io_md": false, 00:20:13.317 "write_zeroes": true, 00:20:13.317 "zcopy": false, 00:20:13.317 "get_zone_info": false, 00:20:13.317 "zone_management": false, 00:20:13.317 "zone_append": false, 00:20:13.317 "compare": true, 00:20:13.317 "compare_and_write": false, 00:20:13.317 "abort": true, 00:20:13.317 "seek_hole": false, 00:20:13.317 "seek_data": false, 00:20:13.317 "copy": true, 00:20:13.317 "nvme_iov_md": false 00:20:13.317 }, 00:20:13.317 "driver_specific": { 00:20:13.317 "nvme": [ 00:20:13.317 { 00:20:13.317 "pci_address": "0000:00:11.0", 00:20:13.317 "trid": { 00:20:13.317 "trtype": "PCIe", 00:20:13.317 "traddr": "0000:00:11.0" 00:20:13.317 }, 00:20:13.317 "ctrlr_data": { 00:20:13.317 "cntlid": 0, 00:20:13.317 "vendor_id": "0x1b36", 00:20:13.317 "model_number": "QEMU NVMe Ctrl", 00:20:13.317 "serial_number": "12341", 00:20:13.317 "firmware_revision": "8.0.0", 00:20:13.317 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:13.317 "oacs": { 00:20:13.317 "security": 0, 00:20:13.317 "format": 1, 00:20:13.317 "firmware": 0, 00:20:13.317 "ns_manage": 1 00:20:13.317 }, 00:20:13.317 "multi_ctrlr": false, 00:20:13.317 "ana_reporting": false 00:20:13.317 }, 00:20:13.317 "vs": { 00:20:13.317 "nvme_version": "1.4" 00:20:13.317 }, 00:20:13.317 "ns_data": { 00:20:13.317 "id": 1, 00:20:13.317 "can_share": false 00:20:13.317 } 00:20:13.317 } 00:20:13.317 ], 00:20:13.317 "mp_policy": "active_passive" 00:20:13.317 } 00:20:13.317 } 00:20:13.317 ]' 00:20:13.317 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:13.579 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:13.579 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:13.579 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:13.579 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:13.579 19:18:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:13.579 19:18:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:13.579 19:18:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:13.579 19:18:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:13.579 19:18:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:13.579 19:18:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:13.837 19:18:23 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=55032bee-1879-4b2e-a300-35ff347395ce 00:20:13.837 19:18:23 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:13.837 19:18:23 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 55032bee-1879-4b2e-a300-35ff347395ce 00:20:13.837 19:18:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:14.095 19:18:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d88d8a72-af28-41bc-9926-88ccc7a3a5d2 00:20:14.095 19:18:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d88d8a72-af28-41bc-9926-88ccc7a3a5d2 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:14.352 19:18:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.352 19:18:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.352 19:18:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:14.352 19:18:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:14.352 19:18:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:14.352 19:18:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:14.611 { 00:20:14.611 "name": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:14.611 "aliases": [ 00:20:14.611 "lvs/nvme0n1p0" 00:20:14.611 ], 00:20:14.611 "product_name": "Logical Volume", 00:20:14.611 "block_size": 4096, 00:20:14.611 "num_blocks": 26476544, 00:20:14.611 "uuid": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:14.611 "assigned_rate_limits": { 00:20:14.611 "rw_ios_per_sec": 0, 00:20:14.611 "rw_mbytes_per_sec": 0, 00:20:14.611 "r_mbytes_per_sec": 0, 00:20:14.611 "w_mbytes_per_sec": 0 00:20:14.611 }, 00:20:14.611 "claimed": false, 00:20:14.611 "zoned": false, 00:20:14.611 "supported_io_types": { 00:20:14.611 "read": true, 00:20:14.611 "write": true, 00:20:14.611 "unmap": true, 00:20:14.611 "flush": false, 00:20:14.611 "reset": true, 00:20:14.611 "nvme_admin": false, 00:20:14.611 "nvme_io": false, 00:20:14.611 "nvme_io_md": false, 00:20:14.611 "write_zeroes": true, 00:20:14.611 "zcopy": false, 00:20:14.611 "get_zone_info": false, 00:20:14.611 "zone_management": false, 00:20:14.611 "zone_append": false, 00:20:14.611 "compare": false, 00:20:14.611 "compare_and_write": false, 00:20:14.611 "abort": false, 00:20:14.611 "seek_hole": true, 00:20:14.611 "seek_data": true, 00:20:14.611 "copy": false, 00:20:14.611 "nvme_iov_md": false 00:20:14.611 }, 00:20:14.611 "driver_specific": { 00:20:14.611 "lvol": { 00:20:14.611 "lvol_store_uuid": "d88d8a72-af28-41bc-9926-88ccc7a3a5d2", 00:20:14.611 "base_bdev": "nvme0n1", 00:20:14.611 "thin_provision": true, 00:20:14.611 "num_allocated_clusters": 0, 00:20:14.611 "snapshot": false, 00:20:14.611 "clone": false, 00:20:14.611 "esnap_clone": false 00:20:14.611 } 00:20:14.611 } 00:20:14.611 } 00:20:14.611 ]' 00:20:14.611 19:18:24 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:14.611 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:14.611 19:18:24 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:14.611 19:18:24 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:14.611 19:18:24 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:14.869 19:18:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:14.869 19:18:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:14.869 19:18:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.869 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:14.869 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:14.869 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:14.869 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:14.869 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:15.128 { 00:20:15.128 "name": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:15.128 "aliases": [ 00:20:15.128 "lvs/nvme0n1p0" 00:20:15.128 ], 00:20:15.128 "product_name": "Logical Volume", 00:20:15.128 "block_size": 4096, 00:20:15.128 "num_blocks": 26476544, 00:20:15.128 "uuid": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:15.128 "assigned_rate_limits": { 00:20:15.128 "rw_ios_per_sec": 0, 00:20:15.128 "rw_mbytes_per_sec": 0, 00:20:15.128 "r_mbytes_per_sec": 0, 00:20:15.128 "w_mbytes_per_sec": 0 00:20:15.128 }, 00:20:15.128 "claimed": false, 00:20:15.128 "zoned": false, 00:20:15.128 "supported_io_types": { 00:20:15.128 "read": true, 00:20:15.128 "write": true, 00:20:15.128 "unmap": true, 00:20:15.128 "flush": false, 00:20:15.128 "reset": true, 00:20:15.128 "nvme_admin": false, 00:20:15.128 "nvme_io": false, 00:20:15.128 "nvme_io_md": false, 00:20:15.128 "write_zeroes": true, 00:20:15.128 "zcopy": false, 00:20:15.128 "get_zone_info": false, 00:20:15.128 "zone_management": false, 00:20:15.128 "zone_append": false, 00:20:15.128 "compare": false, 00:20:15.128 "compare_and_write": false, 00:20:15.128 "abort": false, 00:20:15.128 "seek_hole": true, 00:20:15.128 "seek_data": true, 00:20:15.128 "copy": false, 00:20:15.128 "nvme_iov_md": false 00:20:15.128 }, 00:20:15.128 "driver_specific": { 00:20:15.128 "lvol": { 00:20:15.128 "lvol_store_uuid": "d88d8a72-af28-41bc-9926-88ccc7a3a5d2", 00:20:15.128 "base_bdev": "nvme0n1", 00:20:15.128 "thin_provision": true, 00:20:15.128 "num_allocated_clusters": 0, 00:20:15.128 "snapshot": false, 00:20:15.128 "clone": false, 00:20:15.128 "esnap_clone": false 00:20:15.128 } 00:20:15.128 } 00:20:15.128 } 00:20:15.128 ]' 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:15.128 19:18:24 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:15.128 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:15.128 19:18:24 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:15.128 19:18:24 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:15.387 19:18:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:15.387 19:18:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:15.387 19:18:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:15.387 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:15.387 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:15.387 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:15.387 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:15.387 19:18:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:15.646 { 00:20:15.646 "name": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:15.646 "aliases": [ 00:20:15.646 "lvs/nvme0n1p0" 00:20:15.646 ], 00:20:15.646 "product_name": "Logical Volume", 00:20:15.646 "block_size": 4096, 00:20:15.646 "num_blocks": 26476544, 00:20:15.646 "uuid": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:15.646 "assigned_rate_limits": { 00:20:15.646 "rw_ios_per_sec": 0, 00:20:15.646 "rw_mbytes_per_sec": 0, 00:20:15.646 "r_mbytes_per_sec": 0, 00:20:15.646 "w_mbytes_per_sec": 0 00:20:15.646 }, 00:20:15.646 "claimed": false, 00:20:15.646 "zoned": false, 00:20:15.646 "supported_io_types": { 00:20:15.646 "read": true, 00:20:15.646 "write": true, 00:20:15.646 "unmap": true, 00:20:15.646 "flush": false, 00:20:15.646 "reset": true, 00:20:15.646 "nvme_admin": false, 00:20:15.646 "nvme_io": false, 00:20:15.646 "nvme_io_md": false, 00:20:15.646 "write_zeroes": true, 00:20:15.646 "zcopy": false, 00:20:15.646 "get_zone_info": false, 00:20:15.646 "zone_management": false, 00:20:15.646 "zone_append": false, 00:20:15.646 "compare": false, 00:20:15.646 "compare_and_write": false, 00:20:15.646 "abort": false, 00:20:15.646 "seek_hole": true, 00:20:15.646 "seek_data": true, 00:20:15.646 "copy": false, 00:20:15.646 "nvme_iov_md": false 00:20:15.646 }, 00:20:15.646 "driver_specific": { 00:20:15.646 "lvol": { 00:20:15.646 "lvol_store_uuid": "d88d8a72-af28-41bc-9926-88ccc7a3a5d2", 00:20:15.646 "base_bdev": "nvme0n1", 00:20:15.646 "thin_provision": true, 00:20:15.646 "num_allocated_clusters": 0, 00:20:15.646 "snapshot": false, 00:20:15.646 "clone": false, 00:20:15.646 "esnap_clone": false 00:20:15.646 } 00:20:15.646 } 00:20:15.646 } 00:20:15.646 ]' 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:15.646 19:18:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:15.646 19:18:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:15.646 19:18:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f105b3a0-1a89-4fd2-b5b8-6639c92d31c5 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:15.907 [2024-11-27 19:18:25.312216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.907 [2024-11-27 19:18:25.312259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.907 [2024-11-27 19:18:25.312275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.908 [2024-11-27 19:18:25.312282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.314597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.314627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.908 [2024-11-27 19:18:25.314650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:20:15.908 [2024-11-27 19:18:25.314656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.314732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.908 [2024-11-27 19:18:25.315311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.908 [2024-11-27 19:18:25.315421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.315430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.908 [2024-11-27 19:18:25.315439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:20:15.908 [2024-11-27 19:18:25.315445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.315604] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:20:15.908 [2024-11-27 19:18:25.316830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.316860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:15.908 [2024-11-27 19:18:25.316869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:15.908 [2024-11-27 19:18:25.316878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.323623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.323651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.908 [2024-11-27 19:18:25.323659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:20:15.908 [2024-11-27 19:18:25.323669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.323769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.323779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.908 [2024-11-27 19:18:25.323786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.057 ms 00:20:15.908 [2024-11-27 19:18:25.323796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.323829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.323837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.908 [2024-11-27 19:18:25.323844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:15.908 [2024-11-27 19:18:25.323853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.323882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:15.908 [2024-11-27 19:18:25.327080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.327188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.908 [2024-11-27 19:18:25.327206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:20:15.908 [2024-11-27 19:18:25.327212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.327260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.327279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.908 [2024-11-27 19:18:25.327287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.908 [2024-11-27 19:18:25.327293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.327320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:15.908 [2024-11-27 19:18:25.327428] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.908 [2024-11-27 19:18:25.327442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.908 [2024-11-27 19:18:25.327451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:15.908 [2024-11-27 19:18:25.327461] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327468] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:15.908 [2024-11-27 19:18:25.327482] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.908 [2024-11-27 19:18:25.327491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.908 [2024-11-27 19:18:25.327497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.908 [2024-11-27 19:18:25.327505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 [2024-11-27 19:18:25.327511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.908 [2024-11-27 19:18:25.327518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:20:15.908 [2024-11-27 19:18:25.327524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.327605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.908 
[2024-11-27 19:18:25.327611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.908 [2024-11-27 19:18:25.327618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:15.908 [2024-11-27 19:18:25.327624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.908 [2024-11-27 19:18:25.327728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.908 [2024-11-27 19:18:25.327736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.908 [2024-11-27 19:18:25.327744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.908 [2024-11-27 19:18:25.327761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.908 [2024-11-27 19:18:25.327780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.908 [2024-11-27 19:18:25.327791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.908 [2024-11-27 19:18:25.327796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:15.908 [2024-11-27 19:18:25.327803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.908 [2024-11-27 19:18:25.327808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.908 [2024-11-27 19:18:25.327815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:15.908 [2024-11-27 19:18:25.327820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.908 [2024-11-27 19:18:25.327833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.908 [2024-11-27 19:18:25.327853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.908 [2024-11-27 19:18:25.327871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.908 [2024-11-27 19:18:25.327890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:15.908 [2024-11-27 19:18:25.327906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.908 [2024-11-27 19:18:25.327917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.908 [2024-11-27 19:18:25.327925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.908 [2024-11-27 19:18:25.327937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.908 [2024-11-27 19:18:25.327942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:15.908 [2024-11-27 19:18:25.327948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.908 [2024-11-27 19:18:25.327953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.908 [2024-11-27 19:18:25.327959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:15.908 [2024-11-27 19:18:25.327964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.908 [2024-11-27 19:18:25.327976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:15.908 [2024-11-27 19:18:25.327982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.327987] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.908 [2024-11-27 19:18:25.327994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.908 [2024-11-27 19:18:25.328000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.908 [2024-11-27 19:18:25.328007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.908 [2024-11-27 19:18:25.328013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.908 [2024-11-27 19:18:25.328022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.909 [2024-11-27 19:18:25.328027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.909 [2024-11-27 19:18:25.328035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.909 [2024-11-27 19:18:25.328040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.909 [2024-11-27 19:18:25.328047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.909 [2024-11-27 19:18:25.328055] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.909 [2024-11-27 19:18:25.328064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:15.909 [2024-11-27 19:18:25.328082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:15.909 [2024-11-27 19:18:25.328087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:15.909 [2024-11-27 19:18:25.328094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:15.909 [2024-11-27 19:18:25.328100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:15.909 [2024-11-27 19:18:25.328107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:15.909 [2024-11-27 19:18:25.328112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:15.909 [2024-11-27 19:18:25.328119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:15.909 [2024-11-27 19:18:25.328135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:15.909 [2024-11-27 19:18:25.328144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:15.909 [2024-11-27 19:18:25.328175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.909 [2024-11-27 19:18:25.328183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.909 [2024-11-27 19:18:25.328196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.909 [2024-11-27 19:18:25.328201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.909 [2024-11-27 19:18:25.328209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.909 [2024-11-27 19:18:25.328215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.909 [2024-11-27 19:18:25.328224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.909 [2024-11-27 19:18:25.328230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:20:15.909 [2024-11-27 19:18:25.328245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.909 [2024-11-27 19:18:25.328331] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:15.909 [2024-11-27 19:18:25.328342] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:18.458 [2024-11-27 19:18:27.823977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.824049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:18.458 [2024-11-27 19:18:27.824066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2495.635 ms 00:20:18.458 [2024-11-27 19:18:27.824077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.852405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.852454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.458 [2024-11-27 19:18:27.852468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.061 ms 00:20:18.458 [2024-11-27 19:18:27.852477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.852612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.852624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.458 [2024-11-27 19:18:27.852649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:18.458 [2024-11-27 19:18:27.852664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.892525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.892572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.458 [2024-11-27 19:18:27.892585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.820 ms 00:20:18.458 [2024-11-27 19:18:27.892596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.892692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.892705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.458 [2024-11-27 19:18:27.892715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:18.458 [2024-11-27 19:18:27.892724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.893151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.893170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.458 [2024-11-27 19:18:27.893180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:20:18.458 [2024-11-27 19:18:27.893190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.893304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.893315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.458 [2024-11-27 19:18:27.893337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:18.458 [2024-11-27 19:18:27.893349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.909268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.909301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:18.458 [2024-11-27 19:18:27.909311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.880 ms 00:20:18.458 [2024-11-27 19:18:27.909321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:27.921540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.458 [2024-11-27 19:18:27.939208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:27.939362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.458 [2024-11-27 19:18:27.939382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.779 ms 00:20:18.458 [2024-11-27 19:18:27.939390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:28.015532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:28.015569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:18.458 [2024-11-27 19:18:28.015584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.068 ms 00:20:18.458 [2024-11-27 19:18:28.015592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:28.015811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:28.015823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.458 [2024-11-27 19:18:28.015836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:20:18.458 [2024-11-27 19:18:28.015844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:28.038733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:28.038869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:18.458 [2024-11-27 19:18:28.039035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.855 ms 00:20:18.458 [2024-11-27 19:18:28.039047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:28.061817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:28.061928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:18.458 [2024-11-27 19:18:28.061947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:20:18.458 [2024-11-27 19:18:28.061954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.458 [2024-11-27 19:18:28.062570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.458 [2024-11-27 19:18:28.062588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.458 [2024-11-27 19:18:28.062599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:20:18.458 [2024-11-27 19:18:28.062607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.133718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.133753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:18.717 [2024-11-27 19:18:28.133768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.068 ms 00:20:18.717 [2024-11-27 19:18:28.133776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
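
The trace_step sequence running above and below this point is bdev_ftl_create bringing ftl0 up: superblock and layout init, band/NV-cache initialization, a ~2.5 s NV cache scrub, L2P setup, and the "FTL startup" management process completing in about 2.9 s. The stack underneath was assembled by the xtrace'd RPCs earlier in this log; a condensed replay with the arguments this run used (the rpc.py invocations appear verbatim in the trace, while the shell variable plumbing here is a sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# attach the base (0000:00:11.0) and cache (0000:00:10.0) controllers
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
# size probe used throughout the log: block_size * num_blocks -> MiB
bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
echo $(( bs * nb / 1024 / 1024 ))                             # 5120 MiB
# drop stale lvstores, then carve the FTL data volume; -t makes it
# thin-provisioned, which is why a 103424 MiB lvol fits on a 5120 MiB store
for u in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u "$u"
done
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
# a 5171 MiB slice of the cache controller becomes the NV cache
$rpc bdev_split_create nvc0n1 -s 5171 1
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
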
00:20:18.717 [2024-11-27 19:18:28.158206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.158238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:18.717 [2024-11-27 19:18:28.158251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:20:18.717 [2024-11-27 19:18:28.158259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.180791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.180913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:18.717 [2024-11-27 19:18:28.180932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.472 ms 00:20:18.717 [2024-11-27 19:18:28.180939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.205004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.205047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.717 [2024-11-27 19:18:28.205059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.998 ms 00:20:18.717 [2024-11-27 19:18:28.205066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.205157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.205169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.717 [2024-11-27 19:18:28.205182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.717 [2024-11-27 19:18:28.205190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.205268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.717 [2024-11-27 19:18:28.205277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.717 [2024-11-27 19:18:28.205287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:18.717 [2024-11-27 19:18:28.205295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.717 [2024-11-27 19:18:28.206172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.717 [2024-11-27 19:18:28.209271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2893.635 ms, result 0 00:20:18.717 [2024-11-27 19:18:28.210342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.717 { 00:20:18.717 "name": "ftl0", 00:20:18.717 "uuid": "8b2cde32-d394-44a9-aa28-c467f1c4b33d" 00:20:18.717 } 00:20:18.717 19:18:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.717 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:18.976 19:18:28 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:18.976 [ 00:20:18.976 { 00:20:18.976 "name": "ftl0", 00:20:18.976 "aliases": [ 00:20:18.976 "8b2cde32-d394-44a9-aa28-c467f1c4b33d" 00:20:18.976 ], 00:20:18.976 "product_name": "FTL disk", 00:20:18.976 "block_size": 4096, 00:20:18.976 "num_blocks": 23592960, 00:20:18.976 "uuid": "8b2cde32-d394-44a9-aa28-c467f1c4b33d", 00:20:18.976 "assigned_rate_limits": { 00:20:18.976 "rw_ios_per_sec": 0, 00:20:18.976 "rw_mbytes_per_sec": 0, 00:20:18.976 "r_mbytes_per_sec": 0, 00:20:18.976 "w_mbytes_per_sec": 0 00:20:18.976 }, 00:20:18.976 "claimed": false, 00:20:18.976 "zoned": false, 00:20:18.976 "supported_io_types": { 00:20:18.976 "read": true, 00:20:18.976 "write": true, 00:20:18.976 "unmap": true, 00:20:18.976 "flush": true, 00:20:18.976 "reset": false, 00:20:18.976 "nvme_admin": false, 00:20:18.976 "nvme_io": false, 00:20:18.976 "nvme_io_md": false, 00:20:18.976 "write_zeroes": true, 00:20:18.976 "zcopy": false, 00:20:18.976 "get_zone_info": false, 00:20:18.976 "zone_management": false, 00:20:18.976 "zone_append": false, 00:20:18.976 "compare": false, 00:20:18.976 "compare_and_write": false, 00:20:18.976 "abort": false, 00:20:18.976 "seek_hole": false, 00:20:18.976 "seek_data": false, 00:20:18.976 "copy": false, 00:20:18.976 "nvme_iov_md": false 00:20:18.976 }, 00:20:18.976 "driver_specific": { 00:20:18.976 "ftl": { 00:20:18.976 "base_bdev": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 00:20:18.976 "cache": "nvc0n1p0" 00:20:18.976 } 00:20:18.976 } 00:20:18.976 } 00:20:18.976 ] 00:20:19.234 19:18:28 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:19.234 19:18:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:19.234 19:18:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:19.234 19:18:28 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:19.234 19:18:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:19.492 19:18:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:19.493 { 00:20:19.493 "name": "ftl0", 00:20:19.493 "aliases": [ 00:20:19.493 "8b2cde32-d394-44a9-aa28-c467f1c4b33d" 00:20:19.493 ], 00:20:19.493 "product_name": "FTL disk", 00:20:19.493 "block_size": 4096, 00:20:19.493 "num_blocks": 23592960, 00:20:19.493 "uuid": "8b2cde32-d394-44a9-aa28-c467f1c4b33d", 00:20:19.493 "assigned_rate_limits": { 00:20:19.493 "rw_ios_per_sec": 0, 00:20:19.493 "rw_mbytes_per_sec": 0, 00:20:19.493 "r_mbytes_per_sec": 0, 00:20:19.493 "w_mbytes_per_sec": 0 00:20:19.493 }, 00:20:19.493 "claimed": false, 00:20:19.493 "zoned": false, 00:20:19.493 "supported_io_types": { 00:20:19.493 "read": true, 00:20:19.493 "write": true, 00:20:19.493 "unmap": true, 00:20:19.493 "flush": true, 00:20:19.493 "reset": false, 00:20:19.493 "nvme_admin": false, 00:20:19.493 "nvme_io": false, 00:20:19.493 "nvme_io_md": false, 00:20:19.493 "write_zeroes": true, 00:20:19.493 "zcopy": false, 00:20:19.493 "get_zone_info": false, 00:20:19.493 "zone_management": false, 00:20:19.493 "zone_append": false, 00:20:19.493 "compare": false, 00:20:19.493 "compare_and_write": false, 00:20:19.493 "abort": false, 00:20:19.493 "seek_hole": false, 00:20:19.493 "seek_data": false, 00:20:19.493 "copy": false, 00:20:19.493 "nvme_iov_md": false 00:20:19.493 }, 00:20:19.493 "driver_specific": { 00:20:19.493 "ftl": { 00:20:19.493 "base_bdev": "f105b3a0-1a89-4fd2-b5b8-6639c92d31c5", 
00:20:19.493 "cache": "nvc0n1p0" 00:20:19.493 } 00:20:19.493 } 00:20:19.493 } 00:20:19.493 ]' 00:20:19.493 19:18:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:19.493 19:18:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:19.493 19:18:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:19.762 [2024-11-27 19:18:29.245550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.245694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.762 [2024-11-27 19:18:29.245713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.762 [2024-11-27 19:18:29.245722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.245756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:19.762 [2024-11-27 19:18:29.248024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.248048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.762 [2024-11-27 19:18:29.248063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:20:19.762 [2024-11-27 19:18:29.248070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.248554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.248570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.762 [2024-11-27 19:18:29.248579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:20:19.762 [2024-11-27 19:18:29.248585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.251340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.251355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.762 [2024-11-27 19:18:29.251365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:20:19.762 [2024-11-27 19:18:29.251372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.256657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.256680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.762 [2024-11-27 19:18:29.256690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:20:19.762 [2024-11-27 19:18:29.256696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.275486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.275583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.762 [2024-11-27 19:18:29.275601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms 00:20:19.762 [2024-11-27 19:18:29.275607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.288456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.288485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.762 [2024-11-27 19:18:29.288499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.794 ms 00:20:19.762 [2024-11-27 19:18:29.288505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.288686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.288695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.762 [2024-11-27 19:18:29.288704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:19.762 [2024-11-27 19:18:29.288709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.306354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.306448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.762 [2024-11-27 19:18:29.306463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.619 ms 00:20:19.762 [2024-11-27 19:18:29.306469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.323762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.323785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.762 [2024-11-27 19:18:29.323797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:20:19.762 [2024-11-27 19:18:29.323802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.341244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.341332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.762 [2024-11-27 19:18:29.341346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.389 ms 00:20:19.762 [2024-11-27 19:18:29.341352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.358167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.762 [2024-11-27 19:18:29.358193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.762 [2024-11-27 19:18:29.358202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.726 ms 00:20:19.762 [2024-11-27 19:18:29.358207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.762 [2024-11-27 19:18:29.358257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.762 [2024-11-27 19:18:29.358269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358321] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 
[2024-11-27 19:18:29.358500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.762 [2024-11-27 19:18:29.358520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:19.763 [2024-11-27 19:18:29.358669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.763 [2024-11-27 19:18:29.358962] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.763 [2024-11-27 19:18:29.358970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:20:19.763 [2024-11-27 19:18:29.358976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.763 [2024-11-27 19:18:29.358983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.763 [2024-11-27 19:18:29.358991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.763 [2024-11-27 19:18:29.358998] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.763 [2024-11-27 19:18:29.359003] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.763 [2024-11-27 19:18:29.359010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:19.763 [2024-11-27 19:18:29.359016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.763 [2024-11-27 19:18:29.359021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.763 [2024-11-27 19:18:29.359026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.763 [2024-11-27 19:18:29.359033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.763 [2024-11-27 19:18:29.359039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.763 [2024-11-27 19:18:29.359047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:20:19.763 [2024-11-27 19:18:29.359052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.763 [2024-11-27 19:18:29.369101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.763 [2024-11-27 19:18:29.369199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.763 [2024-11-27 19:18:29.369215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.023 ms 00:20:19.763 [2024-11-27 19:18:29.369221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.763 [2024-11-27 19:18:29.369541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.763 [2024-11-27 19:18:29.369549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.763 [2024-11-27 19:18:29.369557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:19.763 [2024-11-27 19:18:29.369562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.021 [2024-11-27 19:18:29.405821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.021 [2024-11-27 19:18:29.405850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.021 [2024-11-27 19:18:29.405861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.021 [2024-11-27 19:18:29.405867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.021 [2024-11-27 19:18:29.405949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.021 [2024-11-27 19:18:29.405958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.022 [2024-11-27 19:18:29.405966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.405972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.406025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.406035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.022 [2024-11-27 19:18:29.406045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.406051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.406082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.406088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.022 [2024-11-27 19:18:29.406095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.406101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.471588] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.471624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.022 [2024-11-27 19:18:29.471635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.471642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.022 [2024-11-27 19:18:29.522374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.022 [2024-11-27 19:18:29.522487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.022 [2024-11-27 19:18:29.522559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.022 [2024-11-27 19:18:29.522685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.022 [2024-11-27 19:18:29.522755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.022 [2024-11-27 19:18:29.522826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.022 [2024-11-27 19:18:29.522884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.022 [2024-11-27 19:18:29.522891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.022 [2024-11-27 19:18:29.522899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.022 [2024-11-27 19:18:29.522905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:20.022 [2024-11-27 19:18:29.523078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.511 ms, result 0 00:20:20.022 true 00:20:20.022 19:18:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76409 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76409 ']' 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76409 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76409 00:20:20.022 killing process with pid 76409 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76409' 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76409 00:20:20.022 19:18:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76409 00:20:26.604 19:18:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:27.547 65536+0 records in 00:20:27.547 65536+0 records out 00:20:27.547 268435456 bytes (268 MB, 256 MiB) copied, 1.0783 s, 249 MB/s 00:20:27.547 19:18:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:27.547 [2024-11-27 19:18:36.979564] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:27.547 [2024-11-27 19:18:36.979713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76585 ] 00:20:27.547 [2024-11-27 19:18:37.143184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.809 [2024-11-27 19:18:37.268867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.070 [2024-11-27 19:18:37.563672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.070 [2024-11-27 19:18:37.563761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.331 [2024-11-27 19:18:37.722071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.722300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.331 [2024-11-27 19:18:37.722321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.331 [2024-11-27 19:18:37.722330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.725065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.725103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.331 [2024-11-27 19:18:37.725113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:20:28.331 [2024-11-27 19:18:37.725131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.725222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.331 [2024-11-27 19:18:37.725891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.331 [2024-11-27 19:18:37.725915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.725923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.331 [2024-11-27 19:18:37.725933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:20:28.331 [2024-11-27 19:18:37.725940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.727751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:28.331 [2024-11-27 19:18:37.740852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.741014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:28.331 [2024-11-27 19:18:37.741033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:20:28.331 [2024-11-27 19:18:37.741041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.741149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.741161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:28.331 [2024-11-27 19:18:37.741171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:28.331 [2024-11-27 19:18:37.741178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.746864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:28.331 [2024-11-27 19:18:37.746896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.331 [2024-11-27 19:18:37.746905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.644 ms 00:20:28.331 [2024-11-27 19:18:37.746913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.747003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.747012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.331 [2024-11-27 19:18:37.747020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:28.331 [2024-11-27 19:18:37.747028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.747055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.747066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.331 [2024-11-27 19:18:37.747074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:28.331 [2024-11-27 19:18:37.747081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.747102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:28.331 [2024-11-27 19:18:37.750594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.750743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.331 [2024-11-27 19:18:37.750759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms 00:20:28.331 [2024-11-27 19:18:37.750767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.750805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.331 [2024-11-27 19:18:37.750814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.331 [2024-11-27 19:18:37.750822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:28.331 [2024-11-27 19:18:37.750829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.331 [2024-11-27 19:18:37.750851] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:28.331 [2024-11-27 19:18:37.750869] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:28.331 [2024-11-27 19:18:37.750903] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:28.331 [2024-11-27 19:18:37.750918] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:28.331 [2024-11-27 19:18:37.751020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.331 [2024-11-27 19:18:37.751030] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.331 [2024-11-27 19:18:37.751040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:28.331 [2024-11-27 19:18:37.751054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751064] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751071] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:28.332 [2024-11-27 19:18:37.751079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.332 [2024-11-27 19:18:37.751087] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.332 [2024-11-27 19:18:37.751094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.332 [2024-11-27 19:18:37.751102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.332 [2024-11-27 19:18:37.751109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.332 [2024-11-27 19:18:37.751117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:20:28.332 [2024-11-27 19:18:37.751142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.332 [2024-11-27 19:18:37.751257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.332 [2024-11-27 19:18:37.751270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.332 [2024-11-27 19:18:37.751278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:28.332 [2024-11-27 19:18:37.751286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.332 [2024-11-27 19:18:37.751386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.332 [2024-11-27 19:18:37.751396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.332 [2024-11-27 19:18:37.751405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.332 [2024-11-27 19:18:37.751429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.332 [2024-11-27 19:18:37.751451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.332 [2024-11-27 19:18:37.751467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.332 [2024-11-27 19:18:37.751480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:28.332 [2024-11-27 19:18:37.751487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.332 [2024-11-27 19:18:37.751494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.332 [2024-11-27 19:18:37.751501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:28.332 [2024-11-27 19:18:37.751508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.332 [2024-11-27 19:18:37.751520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751527] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.332 [2024-11-27 19:18:37.751541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.332 [2024-11-27 19:18:37.751560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.332 [2024-11-27 19:18:37.751579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.332 [2024-11-27 19:18:37.751599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.332 [2024-11-27 19:18:37.751617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.332 [2024-11-27 19:18:37.751630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.332 [2024-11-27 19:18:37.751636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:28.332 [2024-11-27 19:18:37.751643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.332 [2024-11-27 19:18:37.751650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.332 [2024-11-27 19:18:37.751656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:28.332 [2024-11-27 19:18:37.751662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.332 [2024-11-27 19:18:37.751674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:28.332 [2024-11-27 19:18:37.751681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751688] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.332 [2024-11-27 19:18:37.751695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.332 [2024-11-27 19:18:37.751705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.332 [2024-11-27 19:18:37.751720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.332 [2024-11-27 19:18:37.751726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.332 [2024-11-27 19:18:37.751733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.332 
[2024-11-27 19:18:37.751739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.332 [2024-11-27 19:18:37.751746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.332 [2024-11-27 19:18:37.751753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.332 [2024-11-27 19:18:37.751761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.332 [2024-11-27 19:18:37.751770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:28.332 [2024-11-27 19:18:37.751785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:28.332 [2024-11-27 19:18:37.751792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:28.332 [2024-11-27 19:18:37.751799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:28.332 [2024-11-27 19:18:37.751805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:28.332 [2024-11-27 19:18:37.751813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:28.332 [2024-11-27 19:18:37.751819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:28.332 [2024-11-27 19:18:37.751826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:28.332 [2024-11-27 19:18:37.751833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:28.332 [2024-11-27 19:18:37.751839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:28.332 [2024-11-27 19:18:37.751873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.332 [2024-11-27 19:18:37.751880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.332 [2024-11-27 19:18:37.751894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.332 [2024-11-27 19:18:37.751901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.332 [2024-11-27 19:18:37.751909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.332 [2024-11-27 19:18:37.751916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.332 [2024-11-27 19:18:37.751926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.332 [2024-11-27 19:18:37.751933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:20:28.332 [2024-11-27 19:18:37.751941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.332 [2024-11-27 19:18:37.779486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.332 [2024-11-27 19:18:37.779525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.332 [2024-11-27 19:18:37.779537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.493 ms 00:20:28.332 [2024-11-27 19:18:37.779545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.332 [2024-11-27 19:18:37.779669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.332 [2024-11-27 19:18:37.779679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.333 [2024-11-27 19:18:37.779688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:28.333 [2024-11-27 19:18:37.779695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.820242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.820441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.333 [2024-11-27 19:18:37.820468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.523 ms 00:20:28.333 [2024-11-27 19:18:37.820477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.820585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.820598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.333 [2024-11-27 19:18:37.820607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:28.333 [2024-11-27 19:18:37.820615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.821116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.821179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.333 [2024-11-27 19:18:37.821198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:20:28.333 [2024-11-27 19:18:37.821207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.821352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.821364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.333 [2024-11-27 19:18:37.821373] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:28.333 [2024-11-27 19:18:37.821381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.837356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.837397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.333 [2024-11-27 19:18:37.837409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.952 ms 00:20:28.333 [2024-11-27 19:18:37.837418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.851313] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:28.333 [2024-11-27 19:18:37.851498] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:28.333 [2024-11-27 19:18:37.851521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.851530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:28.333 [2024-11-27 19:18:37.851540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.992 ms 00:20:28.333 [2024-11-27 19:18:37.851548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.876820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.876868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:28.333 [2024-11-27 19:18:37.876880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.188 ms 00:20:28.333 [2024-11-27 19:18:37.876889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.889636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.889679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:28.333 [2024-11-27 19:18:37.889692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:20:28.333 [2024-11-27 19:18:37.889699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.902402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.902446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:28.333 [2024-11-27 19:18:37.902458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:20:28.333 [2024-11-27 19:18:37.902465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.333 [2024-11-27 19:18:37.903146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.333 [2024-11-27 19:18:37.903177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.333 [2024-11-27 19:18:37.903189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:20:28.333 [2024-11-27 19:18:37.903198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.965439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.965591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:28.594 [2024-11-27 19:18:37.965610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.215 ms 00:20:28.594 [2024-11-27 19:18:37.965618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.975967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:28.594 [2024-11-27 19:18:37.989705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.989738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.594 [2024-11-27 19:18:37.989749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.021 ms 00:20:28.594 [2024-11-27 19:18:37.989757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.989832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.989843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:28.594 [2024-11-27 19:18:37.989851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:28.594 [2024-11-27 19:18:37.989859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.989903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.989911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.594 [2024-11-27 19:18:37.989919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:28.594 [2024-11-27 19:18:37.989927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.989960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.989971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.594 [2024-11-27 19:18:37.989979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:28.594 [2024-11-27 19:18:37.989987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:37.990015] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:28.594 [2024-11-27 19:18:37.990024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:37.990032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:28.594 [2024-11-27 19:18:37.990040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:28.594 [2024-11-27 19:18:37.990047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:38.013882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:38.013915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.594 [2024-11-27 19:18:38.013926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 00:20:28.594 [2024-11-27 19:18:38.013934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.594 [2024-11-27 19:18:38.014020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.594 [2024-11-27 19:18:38.014031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.594 [2024-11-27 19:18:38.014039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:28.594 [2024-11-27 19:18:38.014047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:28.594 [2024-11-27 19:18:38.014859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.594 [2024-11-27 19:18:38.017872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.492 ms, result 0 00:20:28.594 [2024-11-27 19:18:38.018865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.594 [2024-11-27 19:18:38.031607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.538  [2024-11-27T19:18:40.114Z] Copying: 24/256 [MB] (24 MBps) [2024-11-27T19:18:41.058Z] Copying: 46/256 [MB] (21 MBps) [2024-11-27T19:18:42.445Z] Copying: 63/256 [MB] (17 MBps) [2024-11-27T19:18:43.388Z] Copying: 79/256 [MB] (15 MBps) [2024-11-27T19:18:44.331Z] Copying: 94/256 [MB] (15 MBps) [2024-11-27T19:18:45.374Z] Copying: 106/256 [MB] (11 MBps) [2024-11-27T19:18:46.316Z] Copying: 125/256 [MB] (19 MBps) [2024-11-27T19:18:47.261Z] Copying: 149/256 [MB] (24 MBps) [2024-11-27T19:18:48.206Z] Copying: 164/256 [MB] (14 MBps) [2024-11-27T19:18:49.151Z] Copying: 177/256 [MB] (13 MBps) [2024-11-27T19:18:50.091Z] Copying: 191/256 [MB] (13 MBps) [2024-11-27T19:18:51.035Z] Copying: 208/256 [MB] (16 MBps) [2024-11-27T19:18:52.420Z] Copying: 227/256 [MB] (19 MBps) [2024-11-27T19:18:52.993Z] Copying: 239/256 [MB] (11 MBps) [2024-11-27T19:18:52.993Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-27 19:18:52.888203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.358 [2024-11-27 19:18:52.895602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-27 19:18:52.895706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:43.358 [2024-11-27 19:18:52.895760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.358 [2024-11-27 19:18:52.895784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.358 [2024-11-27 19:18:52.895813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:43.358 [2024-11-27 19:18:52.897912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.358 [2024-11-27 19:18:52.898001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:43.359 [2024-11-27 19:18:52.898049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:20:43.359 [2024-11-27 19:18:52.898066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.899905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.899995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:43.359 [2024-11-27 19:18:52.900040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:20:43.359 [2024-11-27 19:18:52.900057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.905834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.905932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:43.359 [2024-11-27 19:18:52.905981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.754 ms 00:20:43.359 [2024-11-27 19:18:52.905999] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.911403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.911494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:43.359 [2024-11-27 19:18:52.911505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:20:43.359 [2024-11-27 19:18:52.911512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.929134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.929161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:43.359 [2024-11-27 19:18:52.929170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:20:43.359 [2024-11-27 19:18:52.929175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.940856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.940887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:43.359 [2024-11-27 19:18:52.940898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.641 ms 00:20:43.359 [2024-11-27 19:18:52.940904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.940999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.941006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:43.359 [2024-11-27 19:18:52.941012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:43.359 [2024-11-27 19:18:52.941024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.958893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.958990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:43.359 [2024-11-27 19:18:52.959001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.858 ms 00:20:43.359 [2024-11-27 19:18:52.959006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.359 [2024-11-27 19:18:52.976629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.359 [2024-11-27 19:18:52.976653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:43.359 [2024-11-27 19:18:52.976660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.591 ms 00:20:43.359 [2024-11-27 19:18:52.976666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.622 [2024-11-27 19:18:52.993629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.622 [2024-11-27 19:18:52.993718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:43.622 [2024-11-27 19:18:52.993730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.934 ms 00:20:43.622 [2024-11-27 19:18:52.993735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.622 [2024-11-27 19:18:53.011147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.622 [2024-11-27 19:18:53.011173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:43.622 [2024-11-27 19:18:53.011181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.370 ms 00:20:43.622 [2024-11-27 19:18:53.011186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.622 [2024-11-27 19:18:53.011213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:43.622 [2024-11-27 19:18:53.011224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 
[2024-11-27 19:18:53.011354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:43.622 [2024-11-27 19:18:53.011444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:20:43.623 [2024-11-27 19:18:53.011493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:43.623 [2024-11-27 19:18:53.011793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:43.623 [2024-11-27 19:18:53.011799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:20:43.623 [2024-11-27 19:18:53.011805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:43.623 [2024-11-27 19:18:53.011810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:43.623 [2024-11-27 19:18:53.011815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:43.623 [2024-11-27 19:18:53.011821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:43.623 [2024-11-27 19:18:53.011826] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:43.623 [2024-11-27 19:18:53.011831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:43.623 [2024-11-27 19:18:53.011837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:43.623 [2024-11-27 19:18:53.011842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:43.623 [2024-11-27 19:18:53.011847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:43.623 [2024-11-27 19:18:53.011852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.623 [2024-11-27 19:18:53.011859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:43.623 [2024-11-27 19:18:53.011865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:20:43.623 [2024-11-27 19:18:53.011871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.021404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.623 [2024-11-27 19:18:53.021429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:43.623 [2024-11-27 19:18:53.021436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.520 ms 00:20:43.623 [2024-11-27 19:18:53.021442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.021722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.623 [2024-11-27 19:18:53.021734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:43.623 [2024-11-27 19:18:53.021740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:43.623 [2024-11-27 19:18:53.021746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.049148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.623 [2024-11-27 19:18:53.049174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.623 [2024-11-27 19:18:53.049181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.623 [2024-11-27 19:18:53.049188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.049258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.623 [2024-11-27 19:18:53.049265] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.623 [2024-11-27 19:18:53.049271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.623 [2024-11-27 19:18:53.049276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.049311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.623 [2024-11-27 19:18:53.049318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.623 [2024-11-27 19:18:53.049324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.623 [2024-11-27 19:18:53.049329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.623 [2024-11-27 19:18:53.049342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.049350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.624 [2024-11-27 19:18:53.049356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.049361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.107899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.108023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.624 [2024-11-27 19:18:53.108036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.108042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.624 [2024-11-27 19:18:53.157162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.624 [2024-11-27 19:18:53.157241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.624 [2024-11-27 19:18:53.157284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.624 [2024-11-27 19:18:53.157370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157398] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:43.624 [2024-11-27 19:18:53.157411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.624 [2024-11-27 19:18:53.157459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.624 [2024-11-27 19:18:53.157504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.624 [2024-11-27 19:18:53.157512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.624 [2024-11-27 19:18:53.157518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.624 [2024-11-27 19:18:53.157621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.010 ms, result 0 00:20:45.009 00:20:45.009 00:20:45.009 19:18:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76765 00:20:45.009 19:18:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76765 00:20:45.009 19:18:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:45.009 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76765 ']' 00:20:45.009 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.010 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.010 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.010 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.010 19:18:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:45.010 [2024-11-27 19:18:54.345777] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:45.010 [2024-11-27 19:18:54.346420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76765 ] 00:20:45.010 [2024-11-27 19:18:54.499097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.010 [2024-11-27 19:18:54.579084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.582 19:18:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.582 19:18:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:45.582 19:18:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:45.843 [2024-11-27 19:18:55.406894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.843 [2024-11-27 19:18:55.407081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:46.106 [2024-11-27 19:18:55.575239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.575277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.106 [2024-11-27 19:18:55.575289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.106 [2024-11-27 19:18:55.575296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.577363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.577491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.106 [2024-11-27 19:18:55.577507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:20:46.106 [2024-11-27 19:18:55.577514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.577594] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.106 [2024-11-27 19:18:55.578164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.106 [2024-11-27 19:18:55.578184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.578190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.106 [2024-11-27 19:18:55.578199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:20:46.106 [2024-11-27 19:18:55.578207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.579215] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:46.106 [2024-11-27 19:18:55.589069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.589204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:46.106 [2024-11-27 19:18:55.589218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.859 ms 00:20:46.106 [2024-11-27 19:18:55.589226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.589295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.589305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:46.106 [2024-11-27 19:18:55.589312] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:46.106 [2024-11-27 19:18:55.589319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.593686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.593715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.106 [2024-11-27 19:18:55.593722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:20:46.106 [2024-11-27 19:18:55.593729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.593805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.593814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.106 [2024-11-27 19:18:55.593820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:46.106 [2024-11-27 19:18:55.593829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.593846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.593853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.106 [2024-11-27 19:18:55.593859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.106 [2024-11-27 19:18:55.593866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.593882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:46.106 [2024-11-27 19:18:55.596688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.596795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.106 [2024-11-27 19:18:55.596810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.808 ms 00:20:46.106 [2024-11-27 19:18:55.596816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.596847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.596854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.106 [2024-11-27 19:18:55.596863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.106 [2024-11-27 19:18:55.596869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.596885] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:46.106 [2024-11-27 19:18:55.596897] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:46.106 [2024-11-27 19:18:55.596928] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:46.106 [2024-11-27 19:18:55.596940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:46.106 [2024-11-27 19:18:55.597020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.106 [2024-11-27 19:18:55.597028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.106 [2024-11-27 19:18:55.597040] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.106 [2024-11-27 19:18:55.597048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.106 [2024-11-27 19:18:55.597056] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.106 [2024-11-27 19:18:55.597062] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:46.106 [2024-11-27 19:18:55.597069] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.106 [2024-11-27 19:18:55.597074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.106 [2024-11-27 19:18:55.597082] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.106 [2024-11-27 19:18:55.597088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.597094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.106 [2024-11-27 19:18:55.597100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:46.106 [2024-11-27 19:18:55.597108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.597189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.106 [2024-11-27 19:18:55.597197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.106 [2024-11-27 19:18:55.597203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:46.106 [2024-11-27 19:18:55.597210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.106 [2024-11-27 19:18:55.597285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.106 [2024-11-27 19:18:55.597294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:46.106 [2024-11-27 19:18:55.597300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.106 [2024-11-27 19:18:55.597307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.106 [2024-11-27 19:18:55.597313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.106 [2024-11-27 19:18:55.597319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.106 [2024-11-27 19:18:55.597324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:46.106 [2024-11-27 19:18:55.597333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.106 [2024-11-27 19:18:55.597339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:46.106 [2024-11-27 19:18:55.597346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.106 [2024-11-27 19:18:55.597351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.106 [2024-11-27 19:18:55.597357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:46.106 [2024-11-27 19:18:55.597362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.106 [2024-11-27 19:18:55.597368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.106 [2024-11-27 19:18:55.597373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:46.106 [2024-11-27 19:18:55.597379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.106 
[2024-11-27 19:18:55.597384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.106 [2024-11-27 19:18:55.597392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:46.106 [2024-11-27 19:18:55.597401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.106 [2024-11-27 19:18:55.597408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.106 [2024-11-27 19:18:55.597413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:46.106 [2024-11-27 19:18:55.597419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.107 [2024-11-27 19:18:55.597431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.107 [2024-11-27 19:18:55.597447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:46.107 [2024-11-27 19:18:55.597464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.107 [2024-11-27 19:18:55.597480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.107 [2024-11-27 19:18:55.597492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.107 [2024-11-27 19:18:55.597498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:46.107 [2024-11-27 19:18:55.597503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.107 [2024-11-27 19:18:55.597509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.107 [2024-11-27 19:18:55.597514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:46.107 [2024-11-27 19:18:55.597529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.107 [2024-11-27 19:18:55.597540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:46.107 [2024-11-27 19:18:55.597545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.107 [2024-11-27 19:18:55.597558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.107 [2024-11-27 19:18:55.597565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.107 [2024-11-27 19:18:55.597578] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:46.107 [2024-11-27 19:18:55.597583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.107 [2024-11-27 19:18:55.597590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.107 [2024-11-27 19:18:55.597595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.107 [2024-11-27 19:18:55.597601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.107 [2024-11-27 19:18:55.597606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.107 [2024-11-27 19:18:55.597614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.107 [2024-11-27 19:18:55.597621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:46.107 [2024-11-27 19:18:55.597636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:46.107 [2024-11-27 19:18:55.597643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:46.107 [2024-11-27 19:18:55.597648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:46.107 [2024-11-27 19:18:55.597655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:46.107 [2024-11-27 19:18:55.597660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:46.107 [2024-11-27 19:18:55.597666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:46.107 [2024-11-27 19:18:55.597671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:46.107 [2024-11-27 19:18:55.597678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:46.107 [2024-11-27 19:18:55.597684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:46.107 [2024-11-27 19:18:55.597714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.107 [2024-11-27 
19:18:55.597720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.107 [2024-11-27 19:18:55.597734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.107 [2024-11-27 19:18:55.597741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.107 [2024-11-27 19:18:55.597746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.107 [2024-11-27 19:18:55.597753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.597758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.107 [2024-11-27 19:18:55.597765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:20:46.107 [2024-11-27 19:18:55.597771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.618684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.618711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.107 [2024-11-27 19:18:55.618720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.859 ms 00:20:46.107 [2024-11-27 19:18:55.618728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.618819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.618827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:46.107 [2024-11-27 19:18:55.618834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:46.107 [2024-11-27 19:18:55.618840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.642821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.642850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.107 [2024-11-27 19:18:55.642859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.963 ms 00:20:46.107 [2024-11-27 19:18:55.642865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.642909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.642915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.107 [2024-11-27 19:18:55.642923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:46.107 [2024-11-27 19:18:55.642929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.643235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.643246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.107 [2024-11-27 19:18:55.643257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:46.107 [2024-11-27 19:18:55.643281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.643380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.643387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.107 [2024-11-27 19:18:55.643395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:46.107 [2024-11-27 19:18:55.643400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.655088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.655209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.107 [2024-11-27 19:18:55.655224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.669 ms 00:20:46.107 [2024-11-27 19:18:55.655231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.686517] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:46.107 [2024-11-27 19:18:55.686557] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:46.107 [2024-11-27 19:18:55.686576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.686585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:46.107 [2024-11-27 19:18:55.686596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.263 ms 00:20:46.107 [2024-11-27 19:18:55.686609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.706078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.706209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:46.107 [2024-11-27 19:18:55.706226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.387 ms 00:20:46.107 [2024-11-27 19:18:55.706233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.715681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.107 [2024-11-27 19:18:55.715712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:46.107 [2024-11-27 19:18:55.715724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.165 ms 00:20:46.107 [2024-11-27 19:18:55.715730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.107 [2024-11-27 19:18:55.724372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.108 [2024-11-27 19:18:55.724398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:46.108 [2024-11-27 19:18:55.724407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.593 ms 00:20:46.108 [2024-11-27 19:18:55.724413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.108 [2024-11-27 19:18:55.724868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.108 [2024-11-27 19:18:55.724881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.108 [2024-11-27 19:18:55.724890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:20:46.108 [2024-11-27 19:18:55.724895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 
19:18:55.769881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.770027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:46.369 [2024-11-27 19:18:55.770045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.964 ms 00:20:46.369 [2024-11-27 19:18:55.770052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.778565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:46.369 [2024-11-27 19:18:55.790378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.790416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.369 [2024-11-27 19:18:55.790424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.266 ms 00:20:46.369 [2024-11-27 19:18:55.790432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.790509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.790519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:46.369 [2024-11-27 19:18:55.790526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:46.369 [2024-11-27 19:18:55.790533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.790568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.790576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.369 [2024-11-27 19:18:55.790583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:46.369 [2024-11-27 19:18:55.790591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.790609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.790617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.369 [2024-11-27 19:18:55.790622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.369 [2024-11-27 19:18:55.790631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.790654] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:46.369 [2024-11-27 19:18:55.790673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.790681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:46.369 [2024-11-27 19:18:55.790688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:46.369 [2024-11-27 19:18:55.790695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.369 [2024-11-27 19:18:55.808872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.369 [2024-11-27 19:18:55.808978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.370 [2024-11-27 19:18:55.808995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:20:46.370 [2024-11-27 19:18:55.809001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.370 [2024-11-27 19:18:55.809070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.370 [2024-11-27 19:18:55.809078] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.370 [2024-11-27 19:18:55.809088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:46.370 [2024-11-27 19:18:55.809094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.370 [2024-11-27 19:18:55.809724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.370 [2024-11-27 19:18:55.811961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.272 ms, result 0 00:20:46.370 [2024-11-27 19:18:55.812906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.370 Some configs were skipped because the RPC state that can call them passed over. 00:20:46.370 19:18:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:46.631 [2024-11-27 19:18:56.037707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.631 [2024-11-27 19:18:56.037861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.631 [2024-11-27 19:18:56.037926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:20:46.631 [2024-11-27 19:18:56.037978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.631 [2024-11-27 19:18:56.038039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.100 ms, result 0 00:20:46.631 true 00:20:46.631 19:18:56 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:46.631 [2024-11-27 19:18:56.241877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.631 [2024-11-27 19:18:56.242030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.631 [2024-11-27 19:18:56.242176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:20:46.631 [2024-11-27 19:18:56.242208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.631 [2024-11-27 19:18:56.242293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.045 ms, result 0 00:20:46.631 true 00:20:46.631 19:18:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76765 00:20:46.631 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76765 ']' 00:20:46.631 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76765 00:20:46.631 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:46.631 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.631 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76765 00:20:46.892 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.892 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.892 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76765' 00:20:46.892 killing process with pid 76765 00:20:46.892 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76765 00:20:46.892 19:18:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76765 00:20:47.466 [2024-11-27 19:18:57.023711] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.023809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.466 [2024-11-27 19:18:57.023826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.466 [2024-11-27 19:18:57.023838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.023869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:47.466 [2024-11-27 19:18:57.027448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.027500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.466 [2024-11-27 19:18:57.027518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:20:47.466 [2024-11-27 19:18:57.027527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.029238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.029289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.466 [2024-11-27 19:18:57.029303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:47.466 [2024-11-27 19:18:57.029312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.034085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.034179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.466 [2024-11-27 19:18:57.034195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:20:47.466 [2024-11-27 19:18:57.034206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.041193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.041516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:47.466 [2024-11-27 19:18:57.041547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:20:47.466 [2024-11-27 19:18:57.041556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.053218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.053280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.466 [2024-11-27 19:18:57.053298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.573 ms 00:20:47.466 [2024-11-27 19:18:57.053308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.062942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.062996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.466 [2024-11-27 19:18:57.063010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.569 ms 00:20:47.466 [2024-11-27 19:18:57.063018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.063222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.063236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.466 [2024-11-27 19:18:57.063249] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:47.466 [2024-11-27 19:18:57.063257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.075068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.075119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:47.466 [2024-11-27 19:18:57.075151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.781 ms 00:20:47.466 [2024-11-27 19:18:57.075159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.086368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.086416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:47.466 [2024-11-27 19:18:57.086435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.146 ms 00:20:47.466 [2024-11-27 19:18:57.086444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.466 [2024-11-27 19:18:57.096938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.466 [2024-11-27 19:18:57.097192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.466 [2024-11-27 19:18:57.097221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.429 ms 00:20:47.466 [2024-11-27 19:18:57.097229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.729 [2024-11-27 19:18:57.107882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.729 [2024-11-27 19:18:57.107935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.729 [2024-11-27 19:18:57.107950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.423 ms 00:20:47.729 [2024-11-27 19:18:57.107958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.729 [2024-11-27 19:18:57.108013] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.729 [2024-11-27 19:18:57.108032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 
19:18:57.108160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.729 [2024-11-27 19:18:57.108274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:47.730 [2024-11-27 19:18:57.108432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.108988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.730 [2024-11-27 19:18:57.109067] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.730 [2024-11-27 19:18:57.109080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:20:47.730 [2024-11-27 19:18:57.109093] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.730 [2024-11-27 19:18:57.109104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.730 [2024-11-27 19:18:57.109112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.730 [2024-11-27 19:18:57.109138] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:47.730 [2024-11-27 19:18:57.109147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.730 [2024-11-27 19:18:57.109158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.730 [2024-11-27 19:18:57.109166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.730 [2024-11-27 19:18:57.109178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.730 [2024-11-27 19:18:57.109186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.730 [2024-11-27 19:18:57.109197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:47.730 [2024-11-27 19:18:57.109205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.730 [2024-11-27 19:18:57.109218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:20:47.730 [2024-11-27 19:18:57.109229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.124588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.731 [2024-11-27 19:18:57.124635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.731 [2024-11-27 19:18:57.124653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.314 ms 00:20:47.731 [2024-11-27 19:18:57.124661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.125173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.731 [2024-11-27 19:18:57.125198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.731 [2024-11-27 19:18:57.125218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:20:47.731 [2024-11-27 19:18:57.125227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.178835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.178889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.731 [2024-11-27 19:18:57.178905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.178914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.179032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.179046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.731 [2024-11-27 19:18:57.179061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.179070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.179157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.179170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.731 [2024-11-27 19:18:57.179185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.179206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.179228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.179246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.731 [2024-11-27 19:18:57.179258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.179350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.272754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.272819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.731 [2024-11-27 19:18:57.272838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.272848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 
19:18:57.349538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.349606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.731 [2024-11-27 19:18:57.349627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.349636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.349730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.349742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.731 [2024-11-27 19:18:57.349758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.349767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.349807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.349818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.731 [2024-11-27 19:18:57.349831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.349839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.349954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.349967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.731 [2024-11-27 19:18:57.349979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.349987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.350030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.350041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.731 [2024-11-27 19:18:57.350053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.350063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.350158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.350171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.731 [2024-11-27 19:18:57.350187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.350196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.350262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.731 [2024-11-27 19:18:57.350275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.731 [2024-11-27 19:18:57.350287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.731 [2024-11-27 19:18:57.350296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.731 [2024-11-27 19:18:57.350518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.774 ms, result 0 00:20:48.675 19:18:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:48.675 19:18:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.675 [2024-11-27 19:18:58.174436] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:48.675 [2024-11-27 19:18:58.174574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76818 ] 00:20:48.935 [2024-11-27 19:18:58.337673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.935 [2024-11-27 19:18:58.461681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.196 [2024-11-27 19:18:58.760144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.196 [2024-11-27 19:18:58.760505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.458 [2024-11-27 19:18:58.924174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.924236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:49.458 [2024-11-27 19:18:58.924252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:49.458 [2024-11-27 19:18:58.924262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.927266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.927318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.458 [2024-11-27 19:18:58.927330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:20:49.458 [2024-11-27 19:18:58.927338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.927463] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:49.458 [2024-11-27 19:18:58.928327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:49.458 [2024-11-27 19:18:58.928380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.928389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.458 [2024-11-27 19:18:58.928400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:20:49.458 [2024-11-27 19:18:58.928408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.930255] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:49.458 [2024-11-27 19:18:58.944593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.944645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:49.458 [2024-11-27 19:18:58.944659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.340 ms 00:20:49.458 [2024-11-27 19:18:58.944668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.944793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.944806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:49.458 [2024-11-27 19:18:58.944816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:20:49.458 [2024-11-27 19:18:58.944824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.953280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.953325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.458 [2024-11-27 19:18:58.953336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.409 ms 00:20:49.458 [2024-11-27 19:18:58.953344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.953456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.953466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.458 [2024-11-27 19:18:58.953476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:49.458 [2024-11-27 19:18:58.953485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.953515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.953524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:49.458 [2024-11-27 19:18:58.953533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.458 [2024-11-27 19:18:58.953541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.953564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:49.458 [2024-11-27 19:18:58.957700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.957759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.458 [2024-11-27 19:18:58.957771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.143 ms 00:20:49.458 [2024-11-27 19:18:58.957779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.957859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.957870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:49.458 [2024-11-27 19:18:58.957879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:49.458 [2024-11-27 19:18:58.957888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.458 [2024-11-27 19:18:58.957914] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:49.458 [2024-11-27 19:18:58.957935] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:49.458 [2024-11-27 19:18:58.957973] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:49.458 [2024-11-27 19:18:58.957990] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:49.458 [2024-11-27 19:18:58.958097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:49.458 [2024-11-27 19:18:58.958108] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:49.458 [2024-11-27 19:18:58.958148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:49.458 [2024-11-27 19:18:58.958164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:49.458 [2024-11-27 19:18:58.958174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:49.458 [2024-11-27 19:18:58.958183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:49.458 [2024-11-27 19:18:58.958191] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:49.458 [2024-11-27 19:18:58.958199] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:49.458 [2024-11-27 19:18:58.958207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:49.458 [2024-11-27 19:18:58.958216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.458 [2024-11-27 19:18:58.958225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:49.458 [2024-11-27 19:18:58.958234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:49.459 [2024-11-27 19:18:58.958241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:58.958329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:58.958342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:49.459 [2024-11-27 19:18:58.958349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:49.459 [2024-11-27 19:18:58.958357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:58.958460] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:49.459 [2024-11-27 19:18:58.958472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:49.459 [2024-11-27 19:18:58.958480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:49.459 [2024-11-27 19:18:58.958504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:49.459 [2024-11-27 19:18:58.958524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.459 [2024-11-27 19:18:58.958539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:49.459 [2024-11-27 19:18:58.958553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:49.459 [2024-11-27 19:18:58.958560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.459 [2024-11-27 19:18:58.958567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:49.459 [2024-11-27 19:18:58.958573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:49.459 [2024-11-27 19:18:58.958579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958586] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:49.459 [2024-11-27 19:18:58.958596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:49.459 [2024-11-27 19:18:58.958617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:49.459 [2024-11-27 19:18:58.958638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:49.459 [2024-11-27 19:18:58.958659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:49.459 [2024-11-27 19:18:58.958695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:49.459 [2024-11-27 19:18:58.958715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.459 [2024-11-27 19:18:58.958728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:49.459 [2024-11-27 19:18:58.958735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:49.459 [2024-11-27 19:18:58.958743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.459 [2024-11-27 19:18:58.958749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:49.459 [2024-11-27 19:18:58.958756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:49.459 [2024-11-27 19:18:58.958763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:49.459 [2024-11-27 19:18:58.958777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:49.459 [2024-11-27 19:18:58.958784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958791] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:49.459 [2024-11-27 19:18:58.958800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:49.459 [2024-11-27 19:18:58.958810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.459 [2024-11-27 19:18:58.958829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:49.459 
[2024-11-27 19:18:58.958837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:49.459 [2024-11-27 19:18:58.958844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:49.459 [2024-11-27 19:18:58.958851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:49.459 [2024-11-27 19:18:58.958857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:49.459 [2024-11-27 19:18:58.958865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:49.459 [2024-11-27 19:18:58.958873] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:49.459 [2024-11-27 19:18:58.958884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.958893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:49.459 [2024-11-27 19:18:58.958900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:49.459 [2024-11-27 19:18:58.958908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:49.459 [2024-11-27 19:18:58.958915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:49.459 [2024-11-27 19:18:58.958922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:49.459 [2024-11-27 19:18:58.958930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:49.459 [2024-11-27 19:18:58.958937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:49.459 [2024-11-27 19:18:58.958946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:49.459 [2024-11-27 19:18:58.958952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:49.459 [2024-11-27 19:18:58.958960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.958967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.958974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.958981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.958989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:49.459 [2024-11-27 19:18:58.958997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:49.459 [2024-11-27 19:18:58.959006] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.959014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:49.459 [2024-11-27 19:18:58.959022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:49.459 [2024-11-27 19:18:58.959029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:49.459 [2024-11-27 19:18:58.959037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:49.459 [2024-11-27 19:18:58.959045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:58.959056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:49.459 [2024-11-27 19:18:58.959064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:20:49.459 [2024-11-27 19:18:58.959072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:58.991812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:58.991863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.459 [2024-11-27 19:18:58.991875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.664 ms 00:20:49.459 [2024-11-27 19:18:58.991883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:58.992022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:58.992033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.459 [2024-11-27 19:18:58.992043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:49.459 [2024-11-27 19:18:58.992051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:59.037667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.037725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.459 [2024-11-27 19:18:59.037742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.593 ms 00:20:49.459 [2024-11-27 19:18:59.037751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:59.037868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.037881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.459 [2024-11-27 19:18:59.037891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:49.459 [2024-11-27 19:18:59.037899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:59.038536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.038567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.459 [2024-11-27 19:18:59.038589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:20:49.459 [2024-11-27 19:18:59.038598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 
19:18:59.038785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.038805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.459 [2024-11-27 19:18:59.038815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:49.459 [2024-11-27 19:18:59.038823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:59.055856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.055904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.459 [2024-11-27 19:18:59.055916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.007 ms 00:20:49.459 [2024-11-27 19:18:59.055924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.459 [2024-11-27 19:18:59.070697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:49.459 [2024-11-27 19:18:59.070749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:49.459 [2024-11-27 19:18:59.070764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.459 [2024-11-27 19:18:59.070772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:49.459 [2024-11-27 19:18:59.070782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:20:49.459 [2024-11-27 19:18:59.070790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.097895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.097951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:49.730 [2024-11-27 19:18:59.097965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.005 ms 00:20:49.730 [2024-11-27 19:18:59.097974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.111250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.111299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:49.730 [2024-11-27 19:18:59.111311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.176 ms 00:20:49.730 [2024-11-27 19:18:59.111318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.124590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.124638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:49.730 [2024-11-27 19:18:59.124650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.177 ms 00:20:49.730 [2024-11-27 19:18:59.124658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.125365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.125390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.730 [2024-11-27 19:18:59.125402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:20:49.730 [2024-11-27 19:18:59.125410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.194631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.194718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:49.730 [2024-11-27 19:18:59.194736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.193 ms 00:20:49.730 [2024-11-27 19:18:59.194745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.206397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:49.730 [2024-11-27 19:18:59.225926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.225981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:49.730 [2024-11-27 19:18:59.225995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.075 ms 00:20:49.730 [2024-11-27 19:18:59.226010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.226104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.226117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:49.730 [2024-11-27 19:18:59.226157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:49.730 [2024-11-27 19:18:59.226167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.226226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.226236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:49.730 [2024-11-27 19:18:59.226245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:49.730 [2024-11-27 19:18:59.226257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.226288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.226298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:49.730 [2024-11-27 19:18:59.226307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:49.730 [2024-11-27 19:18:59.226316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.226354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:49.730 [2024-11-27 19:18:59.226366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.226374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:49.730 [2024-11-27 19:18:59.226383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:49.730 [2024-11-27 19:18:59.226391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.252341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.252554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:49.730 [2024-11-27 19:18:59.252580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.929 ms 00:20:49.730 [2024-11-27 19:18:59.252591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.252828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.730 [2024-11-27 19:18:59.252857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:49.730 [2024-11-27 19:18:59.252868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:49.730 [2024-11-27 19:18:59.252876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.730 [2024-11-27 19:18:59.254012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.730 [2024-11-27 19:18:59.257723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.524 ms, result 0 00:20:49.730 [2024-11-27 19:18:59.258904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.730 [2024-11-27 19:18:59.272634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.674  [2024-11-27T19:19:01.698Z] Copying: 20/256 [MB] (20 MBps) [2024-11-27T19:19:02.641Z] Copying: 36/256 [MB] (16 MBps) [2024-11-27T19:19:03.586Z] Copying: 56/256 [MB] (19 MBps) [2024-11-27T19:19:04.530Z] Copying: 72/256 [MB] (16 MBps) [2024-11-27T19:19:05.473Z] Copying: 90/256 [MB] (17 MBps) [2024-11-27T19:19:06.417Z] Copying: 112/256 [MB] (22 MBps) [2024-11-27T19:19:07.361Z] Copying: 130/256 [MB] (18 MBps) [2024-11-27T19:19:08.303Z] Copying: 150/256 [MB] (19 MBps) [2024-11-27T19:19:09.689Z] Copying: 165/256 [MB] (14 MBps) [2024-11-27T19:19:10.633Z] Copying: 182/256 [MB] (17 MBps) [2024-11-27T19:19:11.577Z] Copying: 199/256 [MB] (16 MBps) [2024-11-27T19:19:12.518Z] Copying: 209/256 [MB] (10 MBps) [2024-11-27T19:19:13.461Z] Copying: 220/256 [MB] (10 MBps) [2024-11-27T19:19:14.448Z] Copying: 232/256 [MB] (11 MBps) [2024-11-27T19:19:14.709Z] Copying: 251/256 [MB] (19 MBps) [2024-11-27T19:19:14.709Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-27 19:19:14.478001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:05.074 [2024-11-27 19:19:14.488330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.074 [2024-11-27 19:19:14.488541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:05.074 [2024-11-27 19:19:14.488575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.074 [2024-11-27 19:19:14.488585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.074 [2024-11-27 19:19:14.488619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:05.074 [2024-11-27 19:19:14.491657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.074 [2024-11-27 19:19:14.491816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:05.074 [2024-11-27 19:19:14.491837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:21:05.074 [2024-11-27 19:19:14.491846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.074 [2024-11-27 19:19:14.492120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.074 [2024-11-27 19:19:14.492154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:05.074 [2024-11-27 19:19:14.492164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:21:05.074 [2024-11-27 19:19:14.492172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.074 [2024-11-27 19:19:14.495862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:05.074 [2024-11-27 19:19:14.495884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:05.075 [2024-11-27 19:19:14.495894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms 00:21:05.075 [2024-11-27 19:19:14.495903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.502858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.503019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:05.075 [2024-11-27 19:19:14.503038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:21:05.075 [2024-11-27 19:19:14.503046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.528562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.528609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:05.075 [2024-11-27 19:19:14.528622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.446 ms 00:21:05.075 [2024-11-27 19:19:14.528630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.545883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.546076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:05.075 [2024-11-27 19:19:14.546106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.203 ms 00:21:05.075 [2024-11-27 19:19:14.546115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.546311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.546324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:05.075 [2024-11-27 19:19:14.546346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:05.075 [2024-11-27 19:19:14.546354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.572108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.572168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:05.075 [2024-11-27 19:19:14.572180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:21:05.075 [2024-11-27 19:19:14.572187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.597494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.597536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:05.075 [2024-11-27 19:19:14.597548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.244 ms 00:21:05.075 [2024-11-27 19:19:14.597555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.622727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.622773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:05.075 [2024-11-27 19:19:14.622786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:21:05.075 [2024-11-27 19:19:14.622793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 
[2024-11-27 19:19:14.647433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.075 [2024-11-27 19:19:14.647477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:05.075 [2024-11-27 19:19:14.647489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.547 ms 00:21:05.075 [2024-11-27 19:19:14.647496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.075 [2024-11-27 19:19:14.647561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:05.075 [2024-11-27 19:19:14.647577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:21:05.075 [2024-11-27 19:19:14.647742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.647997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.648005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.648012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.648019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.648027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:05.075 [2024-11-27 19:19:14.648034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648358] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:05.076 [2024-11-27 19:19:14.648405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:05.076 [2024-11-27 19:19:14.648413] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:21:05.076 [2024-11-27 19:19:14.648422] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:05.076 [2024-11-27 19:19:14.648430] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:05.076 [2024-11-27 19:19:14.648437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:05.076 [2024-11-27 19:19:14.648445] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:05.076 [2024-11-27 19:19:14.648452] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:05.076 [2024-11-27 19:19:14.648460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:05.076 [2024-11-27 19:19:14.648471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:05.076 [2024-11-27 19:19:14.648478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:05.076 [2024-11-27 19:19:14.648485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:05.076 [2024-11-27 19:19:14.648492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.076 [2024-11-27 19:19:14.648499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:05.076 [2024-11-27 19:19:14.648508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:21:05.076 [2024-11-27 19:19:14.648516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.662041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.076 [2024-11-27 19:19:14.662082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:05.076 [2024-11-27 19:19:14.662093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.492 ms 00:21:05.076 [2024-11-27 19:19:14.662101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.662549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.076 [2024-11-27 19:19:14.662572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:05.076 [2024-11-27 19:19:14.662582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:21:05.076 [2024-11-27 19:19:14.662590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.701366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.076 [2024-11-27 19:19:14.701495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.076 [2024-11-27 19:19:14.701510] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.076 [2024-11-27 19:19:14.701523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.701592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.076 [2024-11-27 19:19:14.701601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.076 [2024-11-27 19:19:14.701609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.076 [2024-11-27 19:19:14.701617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.701657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.076 [2024-11-27 19:19:14.701666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.076 [2024-11-27 19:19:14.701674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.076 [2024-11-27 19:19:14.701681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.076 [2024-11-27 19:19:14.701701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.076 [2024-11-27 19:19:14.701708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.076 [2024-11-27 19:19:14.701715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.076 [2024-11-27 19:19:14.701722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.336 [2024-11-27 19:19:14.778571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.336 [2024-11-27 19:19:14.778610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.336 [2024-11-27 19:19:14.778621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.336 [2024-11-27 19:19:14.778629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.336 [2024-11-27 19:19:14.843573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.336 [2024-11-27 19:19:14.843625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.336 [2024-11-27 19:19:14.843637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.336 [2024-11-27 19:19:14.843646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.336 [2024-11-27 19:19:14.843701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.336 [2024-11-27 19:19:14.843711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.336 [2024-11-27 19:19:14.843719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.336 [2024-11-27 19:19:14.843728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.336 [2024-11-27 19:19:14.843759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.336 [2024-11-27 19:19:14.843775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.336 [2024-11-27 19:19:14.843783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.336 [2024-11-27 19:19:14.843792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.336 [2024-11-27 19:19:14.843890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.337 [2024-11-27 19:19:14.843901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:21:05.337 [2024-11-27 19:19:14.843911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.337 [2024-11-27 19:19:14.843919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.337 [2024-11-27 19:19:14.843952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.337 [2024-11-27 19:19:14.843962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:05.337 [2024-11-27 19:19:14.843974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.337 [2024-11-27 19:19:14.843982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.337 [2024-11-27 19:19:14.844024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.337 [2024-11-27 19:19:14.844034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.337 [2024-11-27 19:19:14.844043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.337 [2024-11-27 19:19:14.844051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.337 [2024-11-27 19:19:14.844098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.337 [2024-11-27 19:19:14.844112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.337 [2024-11-27 19:19:14.844121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.337 [2024-11-27 19:19:14.844161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.337 [2024-11-27 19:19:14.844334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.977 ms, result 0 00:21:05.905 00:21:05.905 00:21:06.166 19:19:15 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:06.166 19:19:15 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:06.739 19:19:16 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:06.739 [2024-11-27 19:19:16.197361] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:21:06.739 [2024-11-27 19:19:16.197481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77006 ] 00:21:06.739 [2024-11-27 19:19:16.355947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.003 [2024-11-27 19:19:16.452452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.265 [2024-11-27 19:19:16.715411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.265 [2024-11-27 19:19:16.715627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.265 [2024-11-27 19:19:16.874066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.265 [2024-11-27 19:19:16.874117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.265 [2024-11-27 19:19:16.874145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:07.265 [2024-11-27 19:19:16.874154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.265 [2024-11-27 19:19:16.876872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.265 [2024-11-27 19:19:16.876912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.265 [2024-11-27 19:19:16.876923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:21:07.265 [2024-11-27 19:19:16.876931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.265 [2024-11-27 19:19:16.877011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.265 [2024-11-27 19:19:16.877687] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.265 [2024-11-27 19:19:16.877707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.265 [2024-11-27 19:19:16.877715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.265 [2024-11-27 19:19:16.877725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:21:07.265 [2024-11-27 19:19:16.877732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.265 [2024-11-27 19:19:16.879998] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:07.265 [2024-11-27 19:19:16.893257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.265 [2024-11-27 19:19:16.893412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:07.265 [2024-11-27 19:19:16.893431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.261 ms 00:21:07.265 [2024-11-27 19:19:16.893441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.265 [2024-11-27 19:19:16.893803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.265 [2024-11-27 19:19:16.893832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:07.265 [2024-11-27 19:19:16.893844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:07.265 [2024-11-27 19:19:16.893852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.899665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:07.527 [2024-11-27 19:19:16.899702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.527 [2024-11-27 19:19:16.899713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.768 ms 00:21:07.527 [2024-11-27 19:19:16.899720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.899812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.527 [2024-11-27 19:19:16.899822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.527 [2024-11-27 19:19:16.899831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:07.527 [2024-11-27 19:19:16.899839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.899867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.527 [2024-11-27 19:19:16.899875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.527 [2024-11-27 19:19:16.899883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:07.527 [2024-11-27 19:19:16.899890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.899911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:07.527 [2024-11-27 19:19:16.903460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.527 [2024-11-27 19:19:16.903609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.527 [2024-11-27 19:19:16.903626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:21:07.527 [2024-11-27 19:19:16.903634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.903674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.527 [2024-11-27 19:19:16.903684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.527 [2024-11-27 19:19:16.903692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:07.527 [2024-11-27 19:19:16.903699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.903733] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:07.527 [2024-11-27 19:19:16.903752] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:07.527 [2024-11-27 19:19:16.903787] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:07.527 [2024-11-27 19:19:16.903807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:07.527 [2024-11-27 19:19:16.903910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.527 [2024-11-27 19:19:16.903921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.527 [2024-11-27 19:19:16.903931] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:07.527 [2024-11-27 19:19:16.903944] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.527 [2024-11-27 19:19:16.903953] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.527 [2024-11-27 19:19:16.903961] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:07.527 [2024-11-27 19:19:16.903968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.527 [2024-11-27 19:19:16.903975] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.527 [2024-11-27 19:19:16.903983] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.527 [2024-11-27 19:19:16.903990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.527 [2024-11-27 19:19:16.903998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.527 [2024-11-27 19:19:16.904006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:21:07.527 [2024-11-27 19:19:16.904012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.527 [2024-11-27 19:19:16.904112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.904146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.528 [2024-11-27 19:19:16.904154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:07.528 [2024-11-27 19:19:16.904162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.904264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.528 [2024-11-27 19:19:16.904275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.528 [2024-11-27 19:19:16.904283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.528 [2024-11-27 19:19:16.904305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.528 [2024-11-27 19:19:16.904325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.528 [2024-11-27 19:19:16.904339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.528 [2024-11-27 19:19:16.904352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:07.528 [2024-11-27 19:19:16.904358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.528 [2024-11-27 19:19:16.904365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.528 [2024-11-27 19:19:16.904374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:07.528 [2024-11-27 19:19:16.904380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.528 [2024-11-27 19:19:16.904394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904401] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.528 [2024-11-27 19:19:16.904414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.528 [2024-11-27 19:19:16.904433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.528 [2024-11-27 19:19:16.904452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.528 [2024-11-27 19:19:16.904472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.528 [2024-11-27 19:19:16.904491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.528 [2024-11-27 19:19:16.904503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.528 [2024-11-27 19:19:16.904510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:07.528 [2024-11-27 19:19:16.904516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.528 [2024-11-27 19:19:16.904523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.528 [2024-11-27 19:19:16.904530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:07.528 [2024-11-27 19:19:16.904536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.528 [2024-11-27 19:19:16.904549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:07.528 [2024-11-27 19:19:16.904555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904562] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.528 [2024-11-27 19:19:16.904569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.528 [2024-11-27 19:19:16.904578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.528 [2024-11-27 19:19:16.904594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.528 [2024-11-27 19:19:16.904601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.528 [2024-11-27 19:19:16.904607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.528 
[2024-11-27 19:19:16.904614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.528 [2024-11-27 19:19:16.904621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.528 [2024-11-27 19:19:16.904627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.528 [2024-11-27 19:19:16.904636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.528 [2024-11-27 19:19:16.904644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:07.528 [2024-11-27 19:19:16.904659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:07.528 [2024-11-27 19:19:16.904666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:07.528 [2024-11-27 19:19:16.904673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:07.528 [2024-11-27 19:19:16.904680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:07.528 [2024-11-27 19:19:16.904687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:07.528 [2024-11-27 19:19:16.904693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:07.528 [2024-11-27 19:19:16.904700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:07.528 [2024-11-27 19:19:16.904707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:07.528 [2024-11-27 19:19:16.904714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:07.528 [2024-11-27 19:19:16.904749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.528 [2024-11-27 19:19:16.904758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:07.528 [2024-11-27 19:19:16.904774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.528 [2024-11-27 19:19:16.904781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.528 [2024-11-27 19:19:16.904788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.528 [2024-11-27 19:19:16.904795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.904806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.528 [2024-11-27 19:19:16.904813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:21:07.528 [2024-11-27 19:19:16.904821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.932323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.932363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.528 [2024-11-27 19:19:16.932374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.450 ms 00:21:07.528 [2024-11-27 19:19:16.932382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.932503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.932513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.528 [2024-11-27 19:19:16.932522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:07.528 [2024-11-27 19:19:16.932530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.978808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.978851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.528 [2024-11-27 19:19:16.978867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.257 ms 00:21:07.528 [2024-11-27 19:19:16.978875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.978969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.528 [2024-11-27 19:19:16.978981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.528 [2024-11-27 19:19:16.978990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.528 [2024-11-27 19:19:16.978997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.528 [2024-11-27 19:19:16.979396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:16.979413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.529 [2024-11-27 19:19:16.979429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:21:07.529 [2024-11-27 19:19:16.979437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:16.979570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:16.979584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.529 [2024-11-27 19:19:16.979593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:21:07.529 [2024-11-27 19:19:16.979600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:16.993766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:16.993911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.529 [2024-11-27 19:19:16.993928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.145 ms 00:21:07.529 [2024-11-27 19:19:16.993936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.007270] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:07.529 [2024-11-27 19:19:17.007307] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:07.529 [2024-11-27 19:19:17.007319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.007327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:07.529 [2024-11-27 19:19:17.007335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.286 ms 00:21:07.529 [2024-11-27 19:19:17.007342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.032275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.032415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:07.529 [2024-11-27 19:19:17.032433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.857 ms 00:21:07.529 [2024-11-27 19:19:17.032441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.044562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.044596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:07.529 [2024-11-27 19:19:17.044606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:21:07.529 [2024-11-27 19:19:17.044613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.056474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.056518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:07.529 [2024-11-27 19:19:17.056528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.792 ms 00:21:07.529 [2024-11-27 19:19:17.056535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.057171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.057198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.529 [2024-11-27 19:19:17.057208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:07.529 [2024-11-27 19:19:17.057216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.117160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.117213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:07.529 [2024-11-27 19:19:17.117227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.919 ms 00:21:07.529 [2024-11-27 19:19:17.117236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.128104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:07.529 [2024-11-27 19:19:17.146226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.146274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.529 [2024-11-27 19:19:17.146286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.883 ms 00:21:07.529 [2024-11-27 19:19:17.146301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.146391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.146402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:07.529 [2024-11-27 19:19:17.146411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:07.529 [2024-11-27 19:19:17.146419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.146473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.146483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.529 [2024-11-27 19:19:17.146491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:07.529 [2024-11-27 19:19:17.146503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.146534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.146543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:07.529 [2024-11-27 19:19:17.146551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:07.529 [2024-11-27 19:19:17.146559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.529 [2024-11-27 19:19:17.146595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:07.529 [2024-11-27 19:19:17.146606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.529 [2024-11-27 19:19:17.146614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:07.529 [2024-11-27 19:19:17.146622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:07.529 [2024-11-27 19:19:17.146629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.789 [2024-11-27 19:19:17.172448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.789 [2024-11-27 19:19:17.172497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:07.790 [2024-11-27 19:19:17.172511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.797 ms 00:21:07.790 [2024-11-27 19:19:17.172520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.790 [2024-11-27 19:19:17.172640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.790 [2024-11-27 19:19:17.172652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:07.790 [2024-11-27 19:19:17.172662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:07.790 [2024-11-27 19:19:17.172671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
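The layout dump earlier in this startup is internally consistent: with 23592960 L2P entries at an address size of 4 bytes, the l2p region needs 23592960 x 4 = 94371840 bytes = 90 MiB, matching the reported "Region l2p ... blocks: 90.00 MiB". A quick shell check of that arithmetic (illustrative only):

    # 23592960 L2P entries x 4-byte addresses, expressed in MiB
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90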
00:21:07.790 [2024-11-27 19:19:17.173742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.790 [2024-11-27 19:19:17.177216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.345 ms, result 0 00:21:07.790 [2024-11-27 19:19:17.178592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:07.790 [2024-11-27 19:19:17.192550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.052  [2024-11-27T19:19:17.687Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-11-27 19:19:17.501743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:08.052 [2024-11-27 19:19:17.510663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.510704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:08.052 [2024-11-27 19:19:17.510722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:08.052 [2024-11-27 19:19:17.510729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.510751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:08.052 [2024-11-27 19:19:17.513398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.513425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:08.052 [2024-11-27 19:19:17.513435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:21:08.052 [2024-11-27 19:19:17.513444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.516240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.516361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:08.052 [2024-11-27 19:19:17.516377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:21:08.052 [2024-11-27 19:19:17.516385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.520758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.520788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:08.052 [2024-11-27 19:19:17.520797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:21:08.052 [2024-11-27 19:19:17.520804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.527739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.527853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:08.052 [2024-11-27 19:19:17.527870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.907 ms 00:21:08.052 [2024-11-27 19:19:17.527877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.551583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.551618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:08.052 [2024-11-27 19:19:17.551629] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.662 ms 00:21:08.052 [2024-11-27 19:19:17.551636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.566275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.566419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:08.052 [2024-11-27 19:19:17.566437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.602 ms 00:21:08.052 [2024-11-27 19:19:17.566444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.566563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.566572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:08.052 [2024-11-27 19:19:17.566589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:08.052 [2024-11-27 19:19:17.566597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.591014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.591053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:08.052 [2024-11-27 19:19:17.591063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.401 ms 00:21:08.052 [2024-11-27 19:19:17.591069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.615398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.615441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:08.052 [2024-11-27 19:19:17.615452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.286 ms 00:21:08.052 [2024-11-27 19:19:17.615458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.639852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.639892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:08.052 [2024-11-27 19:19:17.639904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:21:08.052 [2024-11-27 19:19:17.639910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.664426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.052 [2024-11-27 19:19:17.664472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:08.052 [2024-11-27 19:19:17.664484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.439 ms 00:21:08.052 [2024-11-27 19:19:17.664492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.052 [2024-11-27 19:19:17.664540] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:08.052 [2024-11-27 19:19:17.664555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:08.052 [2024-11-27 19:19:17.664566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:08.052 [2024-11-27 19:19:17.664574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:08.052 [2024-11-27 19:19:17.664582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:08.052 [2024-11-27 19:19:17.664590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:08.052 [2024-11-27 19:19:17.664598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.664997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665173] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:08.053 [2024-11-27 19:19:17.665325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:08.054 [2024-11-27 19:19:17.665380] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:08.054 [2024-11-27 19:19:17.665388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:21:08.054 [2024-11-27 19:19:17.665397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:08.054 [2024-11-27 19:19:17.665404] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:08.054 [2024-11-27 19:19:17.665412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:08.054 [2024-11-27 19:19:17.665420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:08.054 [2024-11-27 19:19:17.665427] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:08.054 [2024-11-27 19:19:17.665441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:08.054 [2024-11-27 19:19:17.665452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:08.054 [2024-11-27 19:19:17.665459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:08.054 [2024-11-27 19:19:17.665465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:08.054 [2024-11-27 19:19:17.665473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.054 [2024-11-27 19:19:17.665481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:08.054 [2024-11-27 19:19:17.665490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:21:08.054 [2024-11-27 19:19:17.665498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.054 [2024-11-27 19:19:17.678883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.054 [2024-11-27 19:19:17.678928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:08.054 [2024-11-27 19:19:17.678939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.351 ms 00:21:08.054 [2024-11-27 19:19:17.678948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.054 [2024-11-27 19:19:17.679382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.054 [2024-11-27 19:19:17.679399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:08.054 [2024-11-27 19:19:17.679408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:21:08.054 [2024-11-27 19:19:17.679416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.718076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.718148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.315 [2024-11-27 19:19:17.718161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.718176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.718257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.718267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.315 [2024-11-27 19:19:17.718276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.718284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.718341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.718351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.315 [2024-11-27 19:19:17.718359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.718367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.718390] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.718399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.315 [2024-11-27 19:19:17.718407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.718415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.801937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.801990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.315 [2024-11-27 19:19:17.802004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.802021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.315 [2024-11-27 19:19:17.870492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:08.315 [2024-11-27 19:19:17.870581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:08.315 [2024-11-27 19:19:17.870648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:08.315 [2024-11-27 19:19:17.870785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:08.315 [2024-11-27 19:19:17.870853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:08.315 [2024-11-27 19:19:17.870922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.870931] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.870981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:08.315 [2024-11-27 19:19:17.870995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:08.315 [2024-11-27 19:19:17.871004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:08.315 [2024-11-27 19:19:17.871012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.315 [2024-11-27 19:19:17.871200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.486 ms, result 0 00:21:09.256 00:21:09.256 00:21:09.256 19:19:18 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77037 00:21:09.256 19:19:18 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77037 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77037 ']' 00:21:09.256 19:19:18 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.256 19:19:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:09.256 [2024-11-27 19:19:18.730279] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:21:09.256 [2024-11-27 19:19:18.730948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77037 ] 00:21:09.256 [2024-11-27 19:19:18.887106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.516 [2024-11-27 19:19:18.988897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.089 19:19:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.089 19:19:19 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:10.089 19:19:19 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:10.349 [2024-11-27 19:19:19.848967] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.349 [2024-11-27 19:19:19.849031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.612 [2024-11-27 19:19:20.024956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.025012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:10.612 [2024-11-27 19:19:20.025029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:10.612 [2024-11-27 19:19:20.025038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.027860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.028036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:10.612 [2024-11-27 19:19:20.028060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:21:10.612 [2024-11-27 19:19:20.028068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.028307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:10.612 [2024-11-27 19:19:20.029077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:10.612 [2024-11-27 19:19:20.029110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.029118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:10.612 [2024-11-27 19:19:20.029141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:21:10.612 [2024-11-27 19:19:20.029151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.030594] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:10.612 [2024-11-27 19:19:20.044173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.044225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:10.612 [2024-11-27 19:19:20.044239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.585 ms 00:21:10.612 [2024-11-27 19:19:20.044249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.044358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.044372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:10.612 [2024-11-27 19:19:20.044381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:10.612 [2024-11-27 19:19:20.044391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.052111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.052173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:10.612 [2024-11-27 19:19:20.052184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.667 ms 00:21:10.612 [2024-11-27 19:19:20.052193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.052308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.052321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:10.612 [2024-11-27 19:19:20.052330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:10.612 [2024-11-27 19:19:20.052344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.052369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.052379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:10.612 [2024-11-27 19:19:20.052387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:10.612 [2024-11-27 19:19:20.052396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.052420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:10.612 [2024-11-27 19:19:20.056345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.056382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:10.612 [2024-11-27 19:19:20.056395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 00:21:10.612 [2024-11-27 19:19:20.056402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.056476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.612 [2024-11-27 19:19:20.056486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:10.612 [2024-11-27 19:19:20.056499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:10.612 [2024-11-27 19:19:20.056507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.612 [2024-11-27 19:19:20.056532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:10.612 [2024-11-27 19:19:20.056550] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:10.612 [2024-11-27 19:19:20.056595] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:10.612 [2024-11-27 19:19:20.056611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:10.613 [2024-11-27 19:19:20.056718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:10.613 [2024-11-27 19:19:20.056729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:10.613 [2024-11-27 19:19:20.056746] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:10.613 [2024-11-27 19:19:20.056756] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:10.613 [2024-11-27 19:19:20.056767] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:10.613 [2024-11-27 19:19:20.056776] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:10.613 [2024-11-27 19:19:20.056785] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:10.613 [2024-11-27 19:19:20.056793] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:10.613 [2024-11-27 19:19:20.056804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:10.613 [2024-11-27 19:19:20.056812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.613 [2024-11-27 19:19:20.056821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:10.613 [2024-11-27 19:19:20.056829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:10.613 [2024-11-27 19:19:20.056839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.613 [2024-11-27 19:19:20.056926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.613 [2024-11-27 19:19:20.056936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:10.613 [2024-11-27 19:19:20.056944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:10.613 [2024-11-27 19:19:20.056952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.613 [2024-11-27 19:19:20.057053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:10.613 [2024-11-27 19:19:20.057065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:10.613 [2024-11-27 19:19:20.057073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:10.613 [2024-11-27 19:19:20.057110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:10.613 [2024-11-27 19:19:20.057165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.613 [2024-11-27 19:19:20.057181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:10.613 [2024-11-27 19:19:20.057190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:10.613 [2024-11-27 19:19:20.057196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:10.613 [2024-11-27 19:19:20.057207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:10.613 [2024-11-27 19:19:20.057214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:10.613 [2024-11-27 19:19:20.057223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 
[2024-11-27 19:19:20.057230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:10.613 [2024-11-27 19:19:20.057238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:10.613 [2024-11-27 19:19:20.057267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:10.613 [2024-11-27 19:19:20.057294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:10.613 [2024-11-27 19:19:20.057315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:10.613 [2024-11-27 19:19:20.057338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:10.613 [2024-11-27 19:19:20.057359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.613 [2024-11-27 19:19:20.057375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:10.613 [2024-11-27 19:19:20.057383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:10.613 [2024-11-27 19:19:20.057396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:10.613 [2024-11-27 19:19:20.057405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:10.613 [2024-11-27 19:19:20.057412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:10.613 [2024-11-27 19:19:20.057422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:10.613 [2024-11-27 19:19:20.057437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:10.613 [2024-11-27 19:19:20.057443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057451] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:10.613 [2024-11-27 19:19:20.057461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:10.613 [2024-11-27 19:19:20.057472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:10.613 [2024-11-27 19:19:20.057490] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:10.613 [2024-11-27 19:19:20.057497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:10.613 [2024-11-27 19:19:20.057505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:10.613 [2024-11-27 19:19:20.057512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:10.613 [2024-11-27 19:19:20.057520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:10.613 [2024-11-27 19:19:20.057527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:10.613 [2024-11-27 19:19:20.057537] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:10.613 [2024-11-27 19:19:20.057547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:10.613 [2024-11-27 19:19:20.057567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:10.613 [2024-11-27 19:19:20.057576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:10.613 [2024-11-27 19:19:20.057583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:10.613 [2024-11-27 19:19:20.057592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:10.613 [2024-11-27 19:19:20.057600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:10.613 [2024-11-27 19:19:20.057609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:10.613 [2024-11-27 19:19:20.057616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:10.613 [2024-11-27 19:19:20.057624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:10.613 [2024-11-27 19:19:20.057631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:10.613 [2024-11-27 19:19:20.057672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:10.613 [2024-11-27 
19:19:20.057679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:10.613 [2024-11-27 19:19:20.057697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:10.613 [2024-11-27 19:19:20.057706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:10.613 [2024-11-27 19:19:20.057714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:10.613 [2024-11-27 19:19:20.057724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.613 [2024-11-27 19:19:20.057731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:10.613 [2024-11-27 19:19:20.057741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:21:10.613 [2024-11-27 19:19:20.057751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.613 [2024-11-27 19:19:20.092442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.613 [2024-11-27 19:19:20.092521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.614 [2024-11-27 19:19:20.092541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.624 ms 00:21:10.614 [2024-11-27 19:19:20.092555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.092780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.092795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:10.614 [2024-11-27 19:19:20.092808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:10.614 [2024-11-27 19:19:20.092816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.131679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.131743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.614 [2024-11-27 19:19:20.131760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.830 ms 00:21:10.614 [2024-11-27 19:19:20.131769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.131915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.131926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.614 [2024-11-27 19:19:20.131940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:10.614 [2024-11-27 19:19:20.131949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.132698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.132737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.614 [2024-11-27 19:19:20.132752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:21:10.614 [2024-11-27 19:19:20.132762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.132945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.132967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.614 [2024-11-27 19:19:20.132980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:10.614 [2024-11-27 19:19:20.132991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.153979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.154025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.614 [2024-11-27 19:19:20.154040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.957 ms 00:21:10.614 [2024-11-27 19:19:20.154049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.186181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:10.614 [2024-11-27 19:19:20.186471] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:10.614 [2024-11-27 19:19:20.186504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.186515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:10.614 [2024-11-27 19:19:20.186529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.287 ms 00:21:10.614 [2024-11-27 19:19:20.186546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.212587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.212638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:10.614 [2024-11-27 19:19:20.212655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.848 ms 00:21:10.614 [2024-11-27 19:19:20.212665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.225681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.225727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:10.614 [2024-11-27 19:19:20.225746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.907 ms 00:21:10.614 [2024-11-27 19:19:20.225754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.238492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.238537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:10.614 [2024-11-27 19:19:20.238553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.640 ms 00:21:10.614 [2024-11-27 19:19:20.238561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.614 [2024-11-27 19:19:20.239332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.614 [2024-11-27 19:19:20.239363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:10.614 [2024-11-27 19:19:20.239377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:21:10.614 [2024-11-27 19:19:20.239386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 
19:19:20.311273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.311333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:10.875 [2024-11-27 19:19:20.311352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.852 ms 00:21:10.875 [2024-11-27 19:19:20.311361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.323110] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:10.875 [2024-11-27 19:19:20.347605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.347667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:10.875 [2024-11-27 19:19:20.347681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.140 ms 00:21:10.875 [2024-11-27 19:19:20.347692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.347821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.347836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:10.875 [2024-11-27 19:19:20.347847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:10.875 [2024-11-27 19:19:20.347858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.347927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.347940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:10.875 [2024-11-27 19:19:20.347949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:10.875 [2024-11-27 19:19:20.347962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.347992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.348007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:10.875 [2024-11-27 19:19:20.348015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:10.875 [2024-11-27 19:19:20.348025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.348068] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:10.875 [2024-11-27 19:19:20.348085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.348097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:10.875 [2024-11-27 19:19:20.348107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:10.875 [2024-11-27 19:19:20.348153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.375554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.375832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:10.875 [2024-11-27 19:19:20.375861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.367 ms 00:21:10.875 [2024-11-27 19:19:20.375872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.376117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.875 [2024-11-27 19:19:20.376161] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:10.875 [2024-11-27 19:19:20.376179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:10.875 [2024-11-27 19:19:20.376188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.875 [2024-11-27 19:19:20.377499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:10.875 [2024-11-27 19:19:20.381177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.105 ms, result 0 00:21:10.875 [2024-11-27 19:19:20.383275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:10.875 Some configs were skipped because the RPC state that can call them passed over. 00:21:10.875 19:19:20 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:11.136 [2024-11-27 19:19:20.624020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.136 [2024-11-27 19:19:20.624232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:11.136 [2024-11-27 19:19:20.624299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:21:11.136 [2024-11-27 19:19:20.624329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.136 [2024-11-27 19:19:20.624388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.517 ms, result 0 00:21:11.136 true 00:21:11.136 19:19:20 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:11.397 [2024-11-27 19:19:20.835908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.397 [2024-11-27 19:19:20.836070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:11.397 [2024-11-27 19:19:20.836149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:21:11.397 [2024-11-27 19:19:20.836174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.397 [2024-11-27 19:19:20.836233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.129 ms, result 0 00:21:11.397 true 00:21:11.397 19:19:20 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77037 00:21:11.397 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77037 ']' 00:21:11.397 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77037 00:21:11.397 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77037 00:21:11.398 killing process with pid 77037 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77037' 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77037 00:21:11.398 19:19:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77037 00:21:11.966 [2024-11-27 19:19:21.586460] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.966 [2024-11-27 19:19:21.586510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.966 [2024-11-27 19:19:21.586521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.966 [2024-11-27 19:19:21.586529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.966 [2024-11-27 19:19:21.586561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:11.966 [2024-11-27 19:19:21.588709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.966 [2024-11-27 19:19:21.588734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.966 [2024-11-27 19:19:21.588747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:21:11.966 [2024-11-27 19:19:21.588754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.966 [2024-11-27 19:19:21.588995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.966 [2024-11-27 19:19:21.589003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.966 [2024-11-27 19:19:21.589011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:11.966 [2024-11-27 19:19:21.589017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.966 [2024-11-27 19:19:21.592666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.966 [2024-11-27 19:19:21.592821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.966 [2024-11-27 19:19:21.592837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:21:11.966 [2024-11-27 19:19:21.592843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.966 [2024-11-27 19:19:21.598074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.966 [2024-11-27 19:19:21.598097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.966 [2024-11-27 19:19:21.598107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.195 ms 00:21:11.966 [2024-11-27 19:19:21.598114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.606403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.606432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:12.228 [2024-11-27 19:19:21.606443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.229 ms 00:21:12.228 [2024-11-27 19:19:21.606450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.613565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.613590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:12.228 [2024-11-27 19:19:21.613600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.082 ms 00:21:12.228 [2024-11-27 19:19:21.613607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.613717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.613724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:12.228 [2024-11-27 19:19:21.613732] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:12.228 [2024-11-27 19:19:21.613738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.622201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.622223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:12.228 [2024-11-27 19:19:21.622232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:21:12.228 [2024-11-27 19:19:21.622238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.630460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.630490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:12.228 [2024-11-27 19:19:21.630501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.192 ms 00:21:12.228 [2024-11-27 19:19:21.630506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.638043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.638066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:12.228 [2024-11-27 19:19:21.638075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.506 ms 00:21:12.228 [2024-11-27 19:19:21.638081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.645763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.228 [2024-11-27 19:19:21.645785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:12.228 [2024-11-27 19:19:21.645794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.610 ms 00:21:12.228 [2024-11-27 19:19:21.645799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.228 [2024-11-27 19:19:21.645827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:12.228 [2024-11-27 19:19:21.645838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645911] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.645999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 
[2024-11-27 19:19:21.646078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:12.228 [2024-11-27 19:19:21.646206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:12.229 [2024-11-27 19:19:21.646254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:12.229 [2024-11-27 19:19:21.646535] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:12.229 [2024-11-27 19:19:21.646545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:21:12.229 [2024-11-27 19:19:21.646554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:12.229 [2024-11-27 19:19:21.646561] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:12.229 [2024-11-27 19:19:21.646566] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:12.229 [2024-11-27 19:19:21.646574] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:12.229 [2024-11-27 19:19:21.646580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:12.229 [2024-11-27 19:19:21.646588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:12.229 [2024-11-27 19:19:21.646594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:12.229 [2024-11-27 19:19:21.646600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:12.229 [2024-11-27 19:19:21.646605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:12.229 [2024-11-27 19:19:21.646612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
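The block above is the shutdown-time health dump from ftl_debug.c: one validity record per band in the form "Band <n>: <valid> / <size> wr_cnt: <writes> state: <state>" (all 100 bands free with zero valid blocks, matching the "total valid LBAs: 0" line in the statistics), followed by device stats, where "WAF: inf" simply reflects a write-amplification ratio with zero user writes (960 total writes / 0 user writes). A hypothetical way to condense such a dump when reading long runs, assuming the console output was saved to a file named console.log:

    # Summarize a "Bands validity" dump: count bands and total valid blocks.
    # Record shape (printed by ftl_debug.c above):
    #   Band <n>: <valid> / <size> wr_cnt: <writes> state: <state>
    grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' console.log |
        awk '{valid += $3; size += $5}
             END {printf "%d bands, %d / %d blocks valid\n", NR, valid, size}'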
00:21:12.229 [2024-11-27 19:19:21.646618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:12.229 [2024-11-27 19:19:21.646626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:21:12.229 [2024-11-27 19:19:21.646634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.656905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.229 [2024-11-27 19:19:21.656928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:12.229 [2024-11-27 19:19:21.656940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.255 ms 00:21:12.229 [2024-11-27 19:19:21.656946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.657282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.229 [2024-11-27 19:19:21.657292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:12.229 [2024-11-27 19:19:21.657302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:21:12.229 [2024-11-27 19:19:21.657310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.694259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.694284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.229 [2024-11-27 19:19:21.694294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.694300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.694382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.694391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.229 [2024-11-27 19:19:21.694401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.694407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.694442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.694450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.229 [2024-11-27 19:19:21.694460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.694468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.694482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.694490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.229 [2024-11-27 19:19:21.694498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.694505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.756899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.756932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.229 [2024-11-27 19:19:21.756942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.756950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 
19:19:21.807291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.807491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.229 [2024-11-27 19:19:21.807510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.807517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.807590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.229 [2024-11-27 19:19:21.807599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.229 [2024-11-27 19:19:21.807610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.229 [2024-11-27 19:19:21.807616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.229 [2024-11-27 19:19:21.807642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.230 [2024-11-27 19:19:21.807649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.230 [2024-11-27 19:19:21.807657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.230 [2024-11-27 19:19:21.807663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.230 [2024-11-27 19:19:21.807745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.230 [2024-11-27 19:19:21.807754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.230 [2024-11-27 19:19:21.807762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.230 [2024-11-27 19:19:21.807768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.230 [2024-11-27 19:19:21.807797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.230 [2024-11-27 19:19:21.807804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:12.230 [2024-11-27 19:19:21.807813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.230 [2024-11-27 19:19:21.807819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.230 [2024-11-27 19:19:21.807856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.230 [2024-11-27 19:19:21.807862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.230 [2024-11-27 19:19:21.807872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.230 [2024-11-27 19:19:21.807877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.230 [2024-11-27 19:19:21.807918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.230 [2024-11-27 19:19:21.807927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.230 [2024-11-27 19:19:21.807935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.230 [2024-11-27 19:19:21.807940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.230 [2024-11-27 19:19:21.808071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 221.586 ms, result 0 00:21:13.172 19:19:22 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:13.172 [2024-11-27 19:19:22.574005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:13.172 [2024-11-27 19:19:22.574434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77095 ] 00:21:13.172 [2024-11-27 19:19:22.735293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.432 [2024-11-27 19:19:22.824710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.432 [2024-11-27 19:19:23.035313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:13.432 [2024-11-27 19:19:23.035368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:13.693 [2024-11-27 19:19:23.187241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.187278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:13.693 [2024-11-27 19:19:23.187289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:13.693 [2024-11-27 19:19:23.187295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.189360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.189391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:13.693 [2024-11-27 19:19:23.189399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:21:13.693 [2024-11-27 19:19:23.189405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.189463] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:13.693 [2024-11-27 19:19:23.190023] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:13.693 [2024-11-27 19:19:23.190052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.190058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:13.693 [2024-11-27 19:19:23.190065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:21:13.693 [2024-11-27 19:19:23.190071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.191095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:13.693 [2024-11-27 19:19:23.200690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.200842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:13.693 [2024-11-27 19:19:23.200857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.596 ms 00:21:13.693 [2024-11-27 19:19:23.200863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.200933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.200942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:13.693 [2024-11-27 19:19:23.200948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:13.693 [2024-11-27 
19:19:23.200954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.205448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.205473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:13.693 [2024-11-27 19:19:23.205480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:21:13.693 [2024-11-27 19:19:23.205485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.205561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.205570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:13.693 [2024-11-27 19:19:23.205577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:13.693 [2024-11-27 19:19:23.205582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.693 [2024-11-27 19:19:23.205599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.693 [2024-11-27 19:19:23.205606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:13.693 [2024-11-27 19:19:23.205611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:13.693 [2024-11-27 19:19:23.205617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-11-27 19:19:23.205634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:13.694 [2024-11-27 19:19:23.208392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-11-27 19:19:23.208414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:13.694 [2024-11-27 19:19:23.208422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:21:13.694 [2024-11-27 19:19:23.208428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-11-27 19:19:23.208457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-11-27 19:19:23.208463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:13.694 [2024-11-27 19:19:23.208470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:13.694 [2024-11-27 19:19:23.208475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-11-27 19:19:23.208490] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:13.694 [2024-11-27 19:19:23.208504] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:13.694 [2024-11-27 19:19:23.208530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:13.694 [2024-11-27 19:19:23.208542] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:13.694 [2024-11-27 19:19:23.208620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:13.694 [2024-11-27 19:19:23.208629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:13.694 [2024-11-27 19:19:23.208637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
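The startup being traced here belongs to the spdk_dd invocation launched above (trim.sh@105): spdk_dd first loads the subsystem configuration from --json, which recreates the ftl0 bdev, then copies --count blocks from the input bdev into a regular file. The same command, reflowed for readability with the paths the log shows:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Read 65536 blocks from the ftl0 bdev into test/ftl/data, loading the
    # bdev configuration that recreates ftl0 from ftl.json first.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
        --count=65536 --json="$SPDK/test/ftl/config/ftl.json"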
00:21:13.694 [2024-11-27 19:19:23.208646] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208653] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:13.694 [2024-11-27 19:19:23.208665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:13.694 [2024-11-27 19:19:23.208671] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:13.694 [2024-11-27 19:19:23.208676] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:13.694 [2024-11-27 19:19:23.208682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-11-27 19:19:23.208688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:13.694 [2024-11-27 19:19:23.208694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:21:13.694 [2024-11-27 19:19:23.208699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-11-27 19:19:23.208766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.694 [2024-11-27 19:19:23.208774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:13.694 [2024-11-27 19:19:23.208780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:13.694 [2024-11-27 19:19:23.208785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.694 [2024-11-27 19:19:23.208860] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:13.694 [2024-11-27 19:19:23.208867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:13.694 [2024-11-27 19:19:23.208873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:13.694 [2024-11-27 19:19:23.208890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:13.694 [2024-11-27 19:19:23.208907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:13.694 [2024-11-27 19:19:23.208918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:13.694 [2024-11-27 19:19:23.208928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:13.694 [2024-11-27 19:19:23.208933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:13.694 [2024-11-27 19:19:23.208938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:13.694 [2024-11-27 19:19:23.208943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:13.694 [2024-11-27 19:19:23.208950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:13.694 [2024-11-27 19:19:23.208960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:13.694 [2024-11-27 19:19:23.208974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:13.694 [2024-11-27 19:19:23.208990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:13.694 [2024-11-27 19:19:23.208994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.694 [2024-11-27 19:19:23.208999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:13.694 [2024-11-27 19:19:23.209004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.694 [2024-11-27 19:19:23.209014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:13.694 [2024-11-27 19:19:23.209019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:13.694 [2024-11-27 19:19:23.209028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:13.694 [2024-11-27 19:19:23.209033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:13.694 [2024-11-27 19:19:23.209043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:13.694 [2024-11-27 19:19:23.209047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:13.694 [2024-11-27 19:19:23.209052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:13.694 [2024-11-27 19:19:23.209057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:13.694 [2024-11-27 19:19:23.209062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:13.694 [2024-11-27 19:19:23.209067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:13.694 [2024-11-27 19:19:23.209077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:13.694 [2024-11-27 19:19:23.209082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209087] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:13.694 [2024-11-27 19:19:23.209093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:13.694 [2024-11-27 19:19:23.209100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:13.694 [2024-11-27 19:19:23.209105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:13.694 [2024-11-27 19:19:23.209112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:13.694 [2024-11-27 19:19:23.209117] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:13.694 [2024-11-27 19:19:23.209139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:13.694 [2024-11-27 19:19:23.209145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:13.694 [2024-11-27 19:19:23.209150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:13.694 [2024-11-27 19:19:23.209156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:13.694 [2024-11-27 19:19:23.209162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:13.694 [2024-11-27 19:19:23.209169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:13.694 [2024-11-27 19:19:23.209175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:13.694 [2024-11-27 19:19:23.209181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:13.694 [2024-11-27 19:19:23.209186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:13.694 [2024-11-27 19:19:23.209192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:13.694 [2024-11-27 19:19:23.209197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:13.694 [2024-11-27 19:19:23.209202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:13.695 [2024-11-27 19:19:23.209208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:13.695 [2024-11-27 19:19:23.209213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:13.695 [2024-11-27 19:19:23.209218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:13.695 [2024-11-27 19:19:23.209224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:13.695 [2024-11-27 19:19:23.209251] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:13.695 [2024-11-27 19:19:23.209256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:13.695 [2024-11-27 19:19:23.209269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:13.695 [2024-11-27 19:19:23.209274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:13.695 [2024-11-27 19:19:23.209280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:13.695 [2024-11-27 19:19:23.209285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.209293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:13.695 [2024-11-27 19:19:23.209298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:21:13.695 [2024-11-27 19:19:23.209304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.230246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.230272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.695 [2024-11-27 19:19:23.230280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.902 ms 00:21:13.695 [2024-11-27 19:19:23.230286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.230379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.230387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:13.695 [2024-11-27 19:19:23.230393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:13.695 [2024-11-27 19:19:23.230399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.268560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.268682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.695 [2024-11-27 19:19:23.268701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.145 ms 00:21:13.695 [2024-11-27 19:19:23.268708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.268767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.268775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.695 [2024-11-27 19:19:23.268782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:13.695 [2024-11-27 19:19:23.268788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.269097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.269109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.695 [2024-11-27 19:19:23.269116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:13.695 [2024-11-27 19:19:23.269145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.269250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.269258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.695 [2024-11-27 19:19:23.269265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:13.695 [2024-11-27 19:19:23.269270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.280179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.280204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.695 [2024-11-27 19:19:23.280212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.893 ms 00:21:13.695 [2024-11-27 19:19:23.280222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.289919] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:13.695 [2024-11-27 19:19:23.289947] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:13.695 [2024-11-27 19:19:23.289956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.289962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:13.695 [2024-11-27 19:19:23.289968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.649 ms 00:21:13.695 [2024-11-27 19:19:23.289974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.308493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.308524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:13.695 [2024-11-27 19:19:23.308533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:21:13.695 [2024-11-27 19:19:23.308540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.317409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.317437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:13.695 [2024-11-27 19:19:23.317445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.814 ms 00:21:13.695 [2024-11-27 19:19:23.317450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.326026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.326053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:13.695 [2024-11-27 19:19:23.326061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.533 ms 00:21:13.695 [2024-11-27 19:19:23.326066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.695 [2024-11-27 19:19:23.326545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.695 [2024-11-27 19:19:23.326565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:13.695 [2024-11-27 19:19:23.326572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:21:13.695 [2024-11-27 19:19:23.326578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.370410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.370450] 
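The layout dumped a few records earlier is internally consistent, which makes these logs a useful cross-check. Assuming the usual 4 KiB FTL block size, the l2p region's blk_sz:0x5a00 and the reported 23592960 four-byte L2P entries describe the same 90 MiB region:

    23592960 \times 4\,\mathrm{B} = 94371840\,\mathrm{B} = 90\,\mathrm{MiB}
    \mathtt{0x5a00} = 23040\ \mathrm{blocks};\qquad 23040 \times 4096\,\mathrm{B} = 94371840\,\mathrm{B}

The offsets chain the same way: l2p at 0.12 MiB plus its 90.00 MiB puts band_md at 90.12 MiB, its mirror at 90.62 MiB, and the four 8.00 MiB P2L checkpoint regions at 91.12, 99.12, 107.12 and 115.12 MiB, exactly as dumped; at 4 KiB per block the 23592960 entries also imply roughly 90 GiB of user-addressable space.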
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:13.957 [2024-11-27 19:19:23.370460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.814 ms 00:21:13.957 [2024-11-27 19:19:23.370467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.378399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:13.957 [2024-11-27 19:19:23.390105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.390144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:13.957 [2024-11-27 19:19:23.390154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.573 ms 00:21:13.957 [2024-11-27 19:19:23.390165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.390239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.390248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:13.957 [2024-11-27 19:19:23.390254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:13.957 [2024-11-27 19:19:23.390260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.390296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.390303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:13.957 [2024-11-27 19:19:23.390309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:13.957 [2024-11-27 19:19:23.390318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.390343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.390350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:13.957 [2024-11-27 19:19:23.390356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:13.957 [2024-11-27 19:19:23.390362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.390386] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:13.957 [2024-11-27 19:19:23.390394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.390399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:13.957 [2024-11-27 19:19:23.390406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:13.957 [2024-11-27 19:19:23.390412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.408557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.408585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:13.957 [2024-11-27 19:19:23.408594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:21:13.957 [2024-11-27 19:19:23.408600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.408674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.957 [2024-11-27 19:19:23.408683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:13.957 [2024-11-27 19:19:23.408689] 
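Startup ends by setting the FTL dirty state before any user I/O, so that an unclean stop would force recovery on the next load; the matching shutdown step later sets the clean state again. In this test ftl0 comes entirely from the JSON config handed to spdk_dd, but the standalone equivalent would be an RPC along these lines (the base bdev name here is hypothetical and flag spelling varies across SPDK versions, so check `rpc.py bdev_ftl_create -h` before relying on it):

    # Hedged sketch: create an FTL bdev named ftl0 over a base bdev, with
    # nvc0n1p0 (named in the startup log above) as the write-buffer cache.
    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0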
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:13.957 [2024-11-27 19:19:23.408696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.957 [2024-11-27 19:19:23.409338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:13.957 [2024-11-27 19:19:23.411693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.859 ms, result 0 00:21:13.957 [2024-11-27 19:19:23.412534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:13.957 [2024-11-27 19:19:23.427324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:14.897  [2024-11-27T19:19:25.474Z] Copying: 21/256 [MB] (21 MBps) [2024-11-27T19:19:26.857Z] Copying: 32/256 [MB] (11 MBps) [2024-11-27T19:19:27.799Z] Copying: 47/256 [MB] (15 MBps) [2024-11-27T19:19:28.742Z] Copying: 66/256 [MB] (18 MBps) [2024-11-27T19:19:29.686Z] Copying: 79/256 [MB] (12 MBps) [2024-11-27T19:19:30.631Z] Copying: 96/256 [MB] (17 MBps) [2024-11-27T19:19:31.576Z] Copying: 114/256 [MB] (18 MBps) [2024-11-27T19:19:32.520Z] Copying: 127/256 [MB] (12 MBps) [2024-11-27T19:19:33.462Z] Copying: 149/256 [MB] (21 MBps) [2024-11-27T19:19:34.850Z] Copying: 172/256 [MB] (23 MBps) [2024-11-27T19:19:35.791Z] Copying: 194/256 [MB] (22 MBps) [2024-11-27T19:19:36.735Z] Copying: 213/256 [MB] (18 MBps) [2024-11-27T19:19:37.680Z] Copying: 234/256 [MB] (21 MBps) [2024-11-27T19:19:37.680Z] Copying: 253/256 [MB] (18 MBps) [2024-11-27T19:19:37.940Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-27 19:19:37.929088] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:28.566 [2024-11-27 19:19:37.941829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.566 [2024-11-27 19:19:37.941893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:28.566 [2024-11-27 19:19:37.941925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:28.566 [2024-11-27 19:19:37.941935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.566 [2024-11-27 19:19:37.941968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:28.566 [2024-11-27 19:19:37.945408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:37.945461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:28.567 [2024-11-27 19:19:37.945476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.420 ms 00:21:28.567 [2024-11-27 19:19:37.945596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:37.945924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:37.945938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:28.567 [2024-11-27 19:19:37.945950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:28.567 [2024-11-27 19:19:37.945959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:37.949713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:37.949745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:21:28.567 [2024-11-27 19:19:37.949756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:21:28.567 [2024-11-27 19:19:37.949766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:37.957296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:37.957366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:28.567 [2024-11-27 19:19:37.957381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.507 ms 00:21:28.567 [2024-11-27 19:19:37.957390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:37.985018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:37.985073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:28.567 [2024-11-27 19:19:37.985088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.537 ms 00:21:28.567 [2024-11-27 19:19:37.985098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.003655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.003965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:28.567 [2024-11-27 19:19:38.004002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:21:28.567 [2024-11-27 19:19:38.004011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.004206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.004223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:28.567 [2024-11-27 19:19:38.004247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:28.567 [2024-11-27 19:19:38.004256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.031178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.031380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:28.567 [2024-11-27 19:19:38.031403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.900 ms 00:21:28.567 [2024-11-27 19:19:38.031413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.057166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.057213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:28.567 [2024-11-27 19:19:38.057227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.667 ms 00:21:28.567 [2024-11-27 19:19:38.057237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.082546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.082754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:28.567 [2024-11-27 19:19:38.082777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.237 ms 00:21:28.567 [2024-11-27 19:19:38.082786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.108149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.567 [2024-11-27 19:19:38.108198] 
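This second shutdown walks the same step list as the one at the top of this excerpt, but several persists run noticeably longer (for example "Persist NV cache metadata" takes 27.537 ms here versus 8.229 ms before) because the copy left state to flush. When comparing runs it helps to pull out name/duration pairs; a hypothetical one-liner, assuming a capture with one trace record per line in console.log:

    # Pair each traced step's name with its duration and list the slowest.
    # Matches the 428:/430: trace_step records shown above.
    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%8.3f ms  %s\n", $1, name }' \
        console.log | sort -rn | head

The copy itself also checks out: the progress trace reports 256 MB moved between roughly 19:19:23.4 and 19:19:37.9, about 14.5 s, which is close to the "average 18 MBps" it prints.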
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:28.567 [2024-11-27 19:19:38.108211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 00:21:28.567 [2024-11-27 19:19:38.108219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.567 [2024-11-27 19:19:38.108287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:28.567 [2024-11-27 19:19:38.108305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:28.567 [2024-11-27 19:19:38.108494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:28.567 [2024-11-27 19:19:38.108744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.108995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109115] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:28.568 [2024-11-27 19:19:38.109181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:28.568 [2024-11-27 19:19:38.109191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8b2cde32-d394-44a9-aa28-c467f1c4b33d 00:21:28.568 [2024-11-27 19:19:38.109201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:28.568 [2024-11-27 19:19:38.109209] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:28.568 [2024-11-27 19:19:38.109219] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:28.568 [2024-11-27 19:19:38.109228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:28.568 [2024-11-27 19:19:38.109237] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:28.568 [2024-11-27 19:19:38.109248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:28.568 [2024-11-27 19:19:38.109261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:28.568 [2024-11-27 19:19:38.109268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:28.568 [2024-11-27 19:19:38.109275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:28.568 [2024-11-27 19:19:38.109284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.568 [2024-11-27 19:19:38.109292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:28.568 [2024-11-27 19:19:38.109301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:21:28.568 [2024-11-27 19:19:38.109338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.124087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.568 [2024-11-27 19:19:38.124151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:28.568 [2024-11-27 19:19:38.124164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:21:28.568 [2024-11-27 19:19:38.124173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.124645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.568 [2024-11-27 19:19:38.124670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:28.568 [2024-11-27 19:19:38.124682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:28.568 [2024-11-27 19:19:38.124690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.167189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.568 [2024-11-27 19:19:38.167402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:28.568 [2024-11-27 19:19:38.167424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.568 [2024-11-27 19:19:38.167443] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.167568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.568 [2024-11-27 19:19:38.167582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:28.568 [2024-11-27 19:19:38.167592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.568 [2024-11-27 19:19:38.167602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.167660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.568 [2024-11-27 19:19:38.167673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:28.568 [2024-11-27 19:19:38.167682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.568 [2024-11-27 19:19:38.167691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.568 [2024-11-27 19:19:38.167716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.568 [2024-11-27 19:19:38.167726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:28.568 [2024-11-27 19:19:38.167737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.568 [2024-11-27 19:19:38.167745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.262517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.262786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:28.829 [2024-11-27 19:19:38.262812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.262823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.335689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.335729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:28.829 [2024-11-27 19:19:38.335741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.335749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.335827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.335838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.829 [2024-11-27 19:19:38.335847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.335854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.335884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.335896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.829 [2024-11-27 19:19:38.335904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.335912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.336007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.336019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.829 [2024-11-27 19:19:38.336028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:28.829 [2024-11-27 19:19:38.336035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.336066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.336077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:28.829 [2024-11-27 19:19:38.336088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.336095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.336162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.336172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.829 [2024-11-27 19:19:38.336181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.336190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.336236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.829 [2024-11-27 19:19:38.336250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.829 [2024-11-27 19:19:38.336258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.829 [2024-11-27 19:19:38.336267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.829 [2024-11-27 19:19:38.336416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.599 ms, result 0 00:21:29.781 00:21:29.781 00:21:29.781 19:19:39 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:30.106 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:30.106 19:19:39 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:30.106 19:19:39 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:30.106 19:19:39 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:30.106 19:19:39 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:30.106 19:19:39 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:30.367 19:19:39 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:30.367 Process with pid 77037 is not found 00:21:30.367 19:19:39 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77037 00:21:30.367 19:19:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77037 ']' 00:21:30.367 19:19:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77037 00:21:30.367 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77037) - No such process 00:21:30.367 19:19:39 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77037 is not found' 00:21:30.367 ************************************ 00:21:30.367 END TEST ftl_trim 00:21:30.367 ************************************ 00:21:30.367 00:21:30.367 real 1m18.738s 00:21:30.367 user 1m32.997s 00:21:30.367 sys 0m10.669s 00:21:30.367 19:19:39 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.367 19:19:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:30.367 19:19:39 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:30.367 19:19:39 
ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:30.367 19:19:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.367 19:19:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:30.367 ************************************ 00:21:30.367 START TEST ftl_restore 00:21:30.367 ************************************ 00:21:30.367 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:30.367 * Looking for test storage... 00:21:30.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.367 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.367 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.367 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:30.367 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.367 19:19:39 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.368 19:19:39 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:30.368 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.368 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.368 --rc genhtml_branch_coverage=1 00:21:30.368 --rc genhtml_function_coverage=1 00:21:30.368 --rc genhtml_legend=1 00:21:30.368 --rc geninfo_all_blocks=1 00:21:30.368 --rc geninfo_unexecuted_blocks=1 00:21:30.368 00:21:30.368 ' 00:21:30.368 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.368 --rc genhtml_branch_coverage=1 00:21:30.368 --rc genhtml_function_coverage=1 00:21:30.368 --rc genhtml_legend=1 00:21:30.368 --rc geninfo_all_blocks=1 00:21:30.368 --rc geninfo_unexecuted_blocks=1 00:21:30.368 00:21:30.368 ' 00:21:30.368 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.368 --rc genhtml_branch_coverage=1 00:21:30.368 --rc genhtml_function_coverage=1 00:21:30.368 --rc genhtml_legend=1 00:21:30.368 --rc geninfo_all_blocks=1 00:21:30.368 --rc geninfo_unexecuted_blocks=1 00:21:30.368 00:21:30.368 ' 00:21:30.368 19:19:39 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.368 --rc genhtml_branch_coverage=1 00:21:30.368 --rc genhtml_function_coverage=1 00:21:30.368 --rc genhtml_legend=1 00:21:30.368 --rc geninfo_all_blocks=1 00:21:30.368 --rc geninfo_unexecuted_blocks=1 00:21:30.368 00:21:30.368 ' 00:21:30.368 19:19:39 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:30.368 19:19:39 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:30.368 19:19:39 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.368 19:19:39 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:30.368 19:19:39 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:30.629 19:19:39 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:30.629 19:19:39 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.629 19:19:39 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:30.629 19:19:39 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:30.629 19:19:40 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.kF1Lse0D8v 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:30.630 
19:19:40 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77339 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77339 00:21:30.630 19:19:40 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77339 ']' 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.630 19:19:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:30.630 [2024-11-27 19:19:40.105832] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:30.630 [2024-11-27 19:19:40.105994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77339 ] 00:21:30.890 [2024-11-27 19:19:40.268227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.890 [2024-11-27 19:19:40.390792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.462 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.462 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:31.462 19:19:41 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:32.034 19:19:41 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:32.034 19:19:41 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:32.034 19:19:41 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:32.034 { 00:21:32.034 "name": "nvme0n1", 00:21:32.034 "aliases": [ 00:21:32.034 "8c188160-3b3b-40c6-a7a5-7ff20e26ff3c" 00:21:32.034 ], 00:21:32.034 "product_name": "NVMe disk", 00:21:32.034 "block_size": 4096, 00:21:32.034 "num_blocks": 1310720, 00:21:32.034 "uuid": 
"8c188160-3b3b-40c6-a7a5-7ff20e26ff3c", 00:21:32.034 "numa_id": -1, 00:21:32.034 "assigned_rate_limits": { 00:21:32.034 "rw_ios_per_sec": 0, 00:21:32.034 "rw_mbytes_per_sec": 0, 00:21:32.034 "r_mbytes_per_sec": 0, 00:21:32.034 "w_mbytes_per_sec": 0 00:21:32.034 }, 00:21:32.034 "claimed": true, 00:21:32.034 "claim_type": "read_many_write_one", 00:21:32.034 "zoned": false, 00:21:32.034 "supported_io_types": { 00:21:32.034 "read": true, 00:21:32.034 "write": true, 00:21:32.034 "unmap": true, 00:21:32.034 "flush": true, 00:21:32.034 "reset": true, 00:21:32.034 "nvme_admin": true, 00:21:32.034 "nvme_io": true, 00:21:32.034 "nvme_io_md": false, 00:21:32.034 "write_zeroes": true, 00:21:32.034 "zcopy": false, 00:21:32.034 "get_zone_info": false, 00:21:32.034 "zone_management": false, 00:21:32.034 "zone_append": false, 00:21:32.034 "compare": true, 00:21:32.034 "compare_and_write": false, 00:21:32.034 "abort": true, 00:21:32.034 "seek_hole": false, 00:21:32.034 "seek_data": false, 00:21:32.034 "copy": true, 00:21:32.034 "nvme_iov_md": false 00:21:32.034 }, 00:21:32.034 "driver_specific": { 00:21:32.034 "nvme": [ 00:21:32.034 { 00:21:32.034 "pci_address": "0000:00:11.0", 00:21:32.034 "trid": { 00:21:32.034 "trtype": "PCIe", 00:21:32.034 "traddr": "0000:00:11.0" 00:21:32.034 }, 00:21:32.034 "ctrlr_data": { 00:21:32.034 "cntlid": 0, 00:21:32.034 "vendor_id": "0x1b36", 00:21:32.034 "model_number": "QEMU NVMe Ctrl", 00:21:32.034 "serial_number": "12341", 00:21:32.034 "firmware_revision": "8.0.0", 00:21:32.034 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:32.034 "oacs": { 00:21:32.034 "security": 0, 00:21:32.034 "format": 1, 00:21:32.034 "firmware": 0, 00:21:32.034 "ns_manage": 1 00:21:32.034 }, 00:21:32.034 "multi_ctrlr": false, 00:21:32.034 "ana_reporting": false 00:21:32.034 }, 00:21:32.034 "vs": { 00:21:32.034 "nvme_version": "1.4" 00:21:32.034 }, 00:21:32.034 "ns_data": { 00:21:32.034 "id": 1, 00:21:32.034 "can_share": false 00:21:32.034 } 00:21:32.034 } 00:21:32.034 ], 00:21:32.034 "mp_policy": "active_passive" 00:21:32.034 } 00:21:32.034 } 00:21:32.034 ]' 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:32.034 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:32.296 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:32.296 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:32.296 19:19:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d88d8a72-af28-41bc-9926-88ccc7a3a5d2 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:32.296 19:19:41 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d88d8a72-af28-41bc-9926-88ccc7a3a5d2 00:21:32.556 19:19:42 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:32.816 19:19:42 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=243a8426-2a59-4829-b5b3-4d5cb01d4a65 00:21:32.816 19:19:42 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 243a8426-2a59-4829-b5b3-4d5cb01d4a65 00:21:33.078 19:19:42 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.078 19:19:42 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:33.078 19:19:42 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.078 19:19:42 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:33.079 19:19:42 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:33.079 19:19:42 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.079 19:19:42 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:33.079 19:19:42 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.079 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.079 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:33.079 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:33.079 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:33.079 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:33.340 { 00:21:33.340 "name": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:33.340 "aliases": [ 00:21:33.340 "lvs/nvme0n1p0" 00:21:33.340 ], 00:21:33.340 "product_name": "Logical Volume", 00:21:33.340 "block_size": 4096, 00:21:33.340 "num_blocks": 26476544, 00:21:33.340 "uuid": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:33.340 "assigned_rate_limits": { 00:21:33.340 "rw_ios_per_sec": 0, 00:21:33.340 "rw_mbytes_per_sec": 0, 00:21:33.340 "r_mbytes_per_sec": 0, 00:21:33.340 "w_mbytes_per_sec": 0 00:21:33.340 }, 00:21:33.340 "claimed": false, 00:21:33.340 "zoned": false, 00:21:33.340 "supported_io_types": { 00:21:33.340 "read": true, 00:21:33.340 "write": true, 00:21:33.340 "unmap": true, 00:21:33.340 "flush": false, 00:21:33.340 "reset": true, 00:21:33.340 "nvme_admin": false, 00:21:33.340 "nvme_io": false, 00:21:33.340 "nvme_io_md": false, 00:21:33.340 "write_zeroes": true, 00:21:33.340 "zcopy": false, 00:21:33.340 "get_zone_info": false, 00:21:33.340 "zone_management": false, 00:21:33.340 "zone_append": false, 00:21:33.340 "compare": false, 00:21:33.340 "compare_and_write": false, 00:21:33.340 "abort": false, 00:21:33.340 "seek_hole": true, 00:21:33.340 "seek_data": true, 00:21:33.340 "copy": false, 00:21:33.340 "nvme_iov_md": false 00:21:33.340 }, 00:21:33.340 "driver_specific": { 00:21:33.340 "lvol": { 00:21:33.340 "lvol_store_uuid": "243a8426-2a59-4829-b5b3-4d5cb01d4a65", 00:21:33.340 "base_bdev": "nvme0n1", 00:21:33.340 "thin_provision": true, 00:21:33.340 "num_allocated_clusters": 0, 00:21:33.340 "snapshot": false, 00:21:33.340 "clone": false, 00:21:33.340 "esnap_clone": false 00:21:33.340 } 00:21:33.340 } 00:21:33.340 } 00:21:33.340 ]' 00:21:33.340 19:19:42 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:33.340 19:19:42 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:33.340 19:19:42 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:33.340 19:19:42 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:33.340 19:19:42 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:33.601 19:19:43 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:33.601 19:19:43 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:33.601 19:19:43 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.601 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.601 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:33.601 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:33.601 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:33.601 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:33.862 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:33.863 { 00:21:33.863 "name": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:33.863 "aliases": [ 00:21:33.863 "lvs/nvme0n1p0" 00:21:33.863 ], 00:21:33.863 "product_name": "Logical Volume", 00:21:33.863 "block_size": 4096, 00:21:33.863 "num_blocks": 26476544, 00:21:33.863 "uuid": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:33.863 "assigned_rate_limits": { 00:21:33.863 "rw_ios_per_sec": 0, 00:21:33.863 "rw_mbytes_per_sec": 0, 00:21:33.863 "r_mbytes_per_sec": 0, 00:21:33.863 "w_mbytes_per_sec": 0 00:21:33.863 }, 00:21:33.863 "claimed": false, 00:21:33.863 "zoned": false, 00:21:33.863 "supported_io_types": { 00:21:33.863 "read": true, 00:21:33.863 "write": true, 00:21:33.863 "unmap": true, 00:21:33.863 "flush": false, 00:21:33.863 "reset": true, 00:21:33.863 "nvme_admin": false, 00:21:33.863 "nvme_io": false, 00:21:33.863 "nvme_io_md": false, 00:21:33.863 "write_zeroes": true, 00:21:33.863 "zcopy": false, 00:21:33.863 "get_zone_info": false, 00:21:33.863 "zone_management": false, 00:21:33.863 "zone_append": false, 00:21:33.863 "compare": false, 00:21:33.863 "compare_and_write": false, 00:21:33.863 "abort": false, 00:21:33.863 "seek_hole": true, 00:21:33.863 "seek_data": true, 00:21:33.863 "copy": false, 00:21:33.863 "nvme_iov_md": false 00:21:33.863 }, 00:21:33.863 "driver_specific": { 00:21:33.863 "lvol": { 00:21:33.863 "lvol_store_uuid": "243a8426-2a59-4829-b5b3-4d5cb01d4a65", 00:21:33.863 "base_bdev": "nvme0n1", 00:21:33.863 "thin_provision": true, 00:21:33.863 "num_allocated_clusters": 0, 00:21:33.863 "snapshot": false, 00:21:33.863 "clone": false, 00:21:33.863 "esnap_clone": false 00:21:33.863 } 00:21:33.863 } 00:21:33.863 } 00:21:33.863 ]' 00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:33.863 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:33.863 19:19:43 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:33.863 19:19:43 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:34.124 19:19:43 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:34.124 19:19:43 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:34.124 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:34.124 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:34.124 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:34.124 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:34.124 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee07b787-04b5-4b3c-b757-11e40a56008b 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:34.385 { 00:21:34.385 "name": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:34.385 "aliases": [ 00:21:34.385 "lvs/nvme0n1p0" 00:21:34.385 ], 00:21:34.385 "product_name": "Logical Volume", 00:21:34.385 "block_size": 4096, 00:21:34.385 "num_blocks": 26476544, 00:21:34.385 "uuid": "ee07b787-04b5-4b3c-b757-11e40a56008b", 00:21:34.385 "assigned_rate_limits": { 00:21:34.385 "rw_ios_per_sec": 0, 00:21:34.385 "rw_mbytes_per_sec": 0, 00:21:34.385 "r_mbytes_per_sec": 0, 00:21:34.385 "w_mbytes_per_sec": 0 00:21:34.385 }, 00:21:34.385 "claimed": false, 00:21:34.385 "zoned": false, 00:21:34.385 "supported_io_types": { 00:21:34.385 "read": true, 00:21:34.385 "write": true, 00:21:34.385 "unmap": true, 00:21:34.385 "flush": false, 00:21:34.385 "reset": true, 00:21:34.385 "nvme_admin": false, 00:21:34.385 "nvme_io": false, 00:21:34.385 "nvme_io_md": false, 00:21:34.385 "write_zeroes": true, 00:21:34.385 "zcopy": false, 00:21:34.385 "get_zone_info": false, 00:21:34.385 "zone_management": false, 00:21:34.385 "zone_append": false, 00:21:34.385 "compare": false, 00:21:34.385 "compare_and_write": false, 00:21:34.385 "abort": false, 00:21:34.385 "seek_hole": true, 00:21:34.385 "seek_data": true, 00:21:34.385 "copy": false, 00:21:34.385 "nvme_iov_md": false 00:21:34.385 }, 00:21:34.385 "driver_specific": { 00:21:34.385 "lvol": { 00:21:34.385 "lvol_store_uuid": "243a8426-2a59-4829-b5b3-4d5cb01d4a65", 00:21:34.385 "base_bdev": "nvme0n1", 00:21:34.385 "thin_provision": true, 00:21:34.385 "num_allocated_clusters": 0, 00:21:34.385 "snapshot": false, 00:21:34.385 "clone": false, 00:21:34.385 "esnap_clone": false 00:21:34.385 } 00:21:34.385 } 00:21:34.385 } 00:21:34.385 ]' 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:34.385 19:19:43 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:34.385 19:19:43 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ee07b787-04b5-4b3c-b757-11e40a56008b --l2p_dram_limit 10' 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:34.385 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:34.385 19:19:43 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ee07b787-04b5-4b3c-b757-11e40a56008b --l2p_dram_limit 10 -c nvc0n1p0 00:21:34.647 [2024-11-27 19:19:44.027830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.027870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.647 [2024-11-27 19:19:44.027882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.647 [2024-11-27 19:19:44.027889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.027941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.027949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.647 [2024-11-27 19:19:44.027957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:34.647 [2024-11-27 19:19:44.027963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.027979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.647 [2024-11-27 19:19:44.028630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.647 [2024-11-27 19:19:44.028660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.028666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.647 [2024-11-27 19:19:44.028675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:21:34.647 [2024-11-27 19:19:44.028681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.028917] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a739b9e0-5d65-489d-af21-aa598e8a13af 00:21:34.647 [2024-11-27 19:19:44.029868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.029901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:34.647 [2024-11-27 19:19:44.029910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:34.647 [2024-11-27 19:19:44.029920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.034647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 
19:19:44.034678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.647 [2024-11-27 19:19:44.034685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:21:34.647 [2024-11-27 19:19:44.034692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.034774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.034784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.647 [2024-11-27 19:19:44.034791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:34.647 [2024-11-27 19:19:44.034800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.647 [2024-11-27 19:19:44.034845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.647 [2024-11-27 19:19:44.034855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.647 [2024-11-27 19:19:44.034863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:34.647 [2024-11-27 19:19:44.034869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.648 [2024-11-27 19:19:44.034888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.648 [2024-11-27 19:19:44.037702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.648 [2024-11-27 19:19:44.037728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.648 [2024-11-27 19:19:44.037738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:21:34.648 [2024-11-27 19:19:44.037744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.648 [2024-11-27 19:19:44.037770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.648 [2024-11-27 19:19:44.037776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.648 [2024-11-27 19:19:44.037784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:34.648 [2024-11-27 19:19:44.037790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.648 [2024-11-27 19:19:44.037809] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:34.648 [2024-11-27 19:19:44.037914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.648 [2024-11-27 19:19:44.037927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.648 [2024-11-27 19:19:44.037935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.648 [2024-11-27 19:19:44.037944] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.648 [2024-11-27 19:19:44.037951] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.648 [2024-11-27 19:19:44.037958] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:34.648 [2024-11-27 19:19:44.037966] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.648 [2024-11-27 19:19:44.037972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.648 [2024-11-27 19:19:44.037978] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.648 [2024-11-27 19:19:44.037985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.648 [2024-11-27 19:19:44.037995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.648 [2024-11-27 19:19:44.038002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:21:34.648 [2024-11-27 19:19:44.038008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.648 [2024-11-27 19:19:44.038073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.648 [2024-11-27 19:19:44.038080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.648 [2024-11-27 19:19:44.038087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:34.648 [2024-11-27 19:19:44.038092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.648 [2024-11-27 19:19:44.038179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.648 [2024-11-27 19:19:44.038192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.648 [2024-11-27 19:19:44.038200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.648 [2024-11-27 19:19:44.038219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.648 [2024-11-27 19:19:44.038237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.648 [2024-11-27 19:19:44.038249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.648 [2024-11-27 19:19:44.038254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:34.648 [2024-11-27 19:19:44.038260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.648 [2024-11-27 19:19:44.038265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.648 [2024-11-27 19:19:44.038271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:34.648 [2024-11-27 19:19:44.038276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.648 [2024-11-27 19:19:44.038289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.648 [2024-11-27 19:19:44.038310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.648 
[2024-11-27 19:19:44.038327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.648 [2024-11-27 19:19:44.038345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.648 [2024-11-27 19:19:44.038361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.648 [2024-11-27 19:19:44.038379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.648 [2024-11-27 19:19:44.038390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.648 [2024-11-27 19:19:44.038395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:34.648 [2024-11-27 19:19:44.038401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.648 [2024-11-27 19:19:44.038406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.648 [2024-11-27 19:19:44.038412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:34.648 [2024-11-27 19:19:44.038417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.648 [2024-11-27 19:19:44.038428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:34.648 [2024-11-27 19:19:44.038434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038438] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.648 [2024-11-27 19:19:44.038445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.648 [2024-11-27 19:19:44.038451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.648 [2024-11-27 19:19:44.038464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.648 [2024-11-27 19:19:44.038472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.648 [2024-11-27 19:19:44.038478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.648 [2024-11-27 19:19:44.038486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.648 [2024-11-27 19:19:44.038491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.648 [2024-11-27 19:19:44.038498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.648 [2024-11-27 19:19:44.038506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.648 [2024-11-27 
19:19:44.038515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.648 [2024-11-27 19:19:44.038522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:34.648 [2024-11-27 19:19:44.038529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:34.648 [2024-11-27 19:19:44.038534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:34.648 [2024-11-27 19:19:44.038541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:34.648 [2024-11-27 19:19:44.038546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:34.648 [2024-11-27 19:19:44.038553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:34.648 [2024-11-27 19:19:44.038559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:34.648 [2024-11-27 19:19:44.038565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:34.648 [2024-11-27 19:19:44.038571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:34.648 [2024-11-27 19:19:44.038578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:34.648 [2024-11-27 19:19:44.038584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:34.648 [2024-11-27 19:19:44.038590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:34.648 [2024-11-27 19:19:44.038595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:34.648 [2024-11-27 19:19:44.038603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:34.648 [2024-11-27 19:19:44.038608] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.648 [2024-11-27 19:19:44.038615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.649 [2024-11-27 19:19:44.038621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.649 [2024-11-27 19:19:44.038628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.649 [2024-11-27 19:19:44.038633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.649 [2024-11-27 19:19:44.038639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.649 [2024-11-27 19:19:44.038645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.649 [2024-11-27 19:19:44.038652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.649 [2024-11-27 19:19:44.038658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:21:34.649 [2024-11-27 19:19:44.038664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.649 [2024-11-27 19:19:44.038693] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:34.649 [2024-11-27 19:19:44.038703] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:38.855 [2024-11-27 19:19:48.397342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.397432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:38.855 [2024-11-27 19:19:48.397451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4358.630 ms 00:21:38.855 [2024-11-27 19:19:48.397464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.430623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.430698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.855 [2024-11-27 19:19:48.430727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.904 ms 00:21:38.855 [2024-11-27 19:19:48.430739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.430887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.430902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:38.855 [2024-11-27 19:19:48.430911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:38.855 [2024-11-27 19:19:48.430927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.466851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.466902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.855 [2024-11-27 19:19:48.466916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.886 ms 00:21:38.855 [2024-11-27 19:19:48.466926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.466969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.466982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.855 [2024-11-27 19:19:48.466991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:38.855 [2024-11-27 19:19:48.467010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.467624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.467669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.855 [2024-11-27 19:19:48.467681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:38.855 [2024-11-27 19:19:48.467692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 
[2024-11-27 19:19:48.467813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.467828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.855 [2024-11-27 19:19:48.467837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:38.855 [2024-11-27 19:19:48.467851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.855 [2024-11-27 19:19:48.485737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.855 [2024-11-27 19:19:48.485794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.855 [2024-11-27 19:19:48.485805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:21:38.855 [2024-11-27 19:19:48.485815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.508518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:39.117 [2024-11-27 19:19:48.512484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.512538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:39.117 [2024-11-27 19:19:48.512553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.574 ms 00:21:39.117 [2024-11-27 19:19:48.512562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.619899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.619961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:39.117 [2024-11-27 19:19:48.619979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.284 ms 00:21:39.117 [2024-11-27 19:19:48.619989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.620227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.620240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:39.117 [2024-11-27 19:19:48.620257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:21:39.117 [2024-11-27 19:19:48.620265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.647612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.647671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:39.117 [2024-11-27 19:19:48.647687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.281 ms 00:21:39.117 [2024-11-27 19:19:48.647696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.673626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.673677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:39.117 [2024-11-27 19:19:48.673693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.864 ms 00:21:39.117 [2024-11-27 19:19:48.673700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.117 [2024-11-27 19:19:48.674362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.117 [2024-11-27 19:19:48.674386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:39.117 
[2024-11-27 19:19:48.674402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:21:39.117 [2024-11-27 19:19:48.674411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.378 [2024-11-27 19:19:48.768152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.378 [2024-11-27 19:19:48.768207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:39.378 [2024-11-27 19:19:48.768227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.659 ms 00:21:39.378 [2024-11-27 19:19:48.768236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.378 [2024-11-27 19:19:48.796897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.378 [2024-11-27 19:19:48.796949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:39.378 [2024-11-27 19:19:48.796965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.555 ms 00:21:39.378 [2024-11-27 19:19:48.796974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.378 [2024-11-27 19:19:48.823915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.378 [2024-11-27 19:19:48.823965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:39.378 [2024-11-27 19:19:48.823979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.883 ms 00:21:39.378 [2024-11-27 19:19:48.823987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.378 [2024-11-27 19:19:48.851271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.378 [2024-11-27 19:19:48.851326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:39.378 [2024-11-27 19:19:48.851341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.225 ms 00:21:39.378 [2024-11-27 19:19:48.851348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.378 [2024-11-27 19:19:48.851408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.378 [2024-11-27 19:19:48.851418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:39.379 [2024-11-27 19:19:48.851434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:39.379 [2024-11-27 19:19:48.851442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.379 [2024-11-27 19:19:48.851556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.379 [2024-11-27 19:19:48.851571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:39.379 [2024-11-27 19:19:48.851582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:39.379 [2024-11-27 19:19:48.851590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.379 [2024-11-27 19:19:48.852791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4824.450 ms, result 0 00:21:39.379 { 00:21:39.379 "name": "ftl0", 00:21:39.379 "uuid": "a739b9e0-5d65-489d-af21-aa598e8a13af" 00:21:39.379 } 00:21:39.379 19:19:48 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:39.379 19:19:48 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:39.640 19:19:49 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:39.640 19:19:49 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:39.902 [2024-11-27 19:19:49.296112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.296202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:39.902 [2024-11-27 19:19:49.296217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:39.902 [2024-11-27 19:19:49.296228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.296255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:39.902 [2024-11-27 19:19:49.299328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.299378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:39.902 [2024-11-27 19:19:49.299393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:21:39.902 [2024-11-27 19:19:49.299401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.299687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.299698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:39.902 [2024-11-27 19:19:49.299711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:21:39.902 [2024-11-27 19:19:49.299718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.302966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.302989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:39.902 [2024-11-27 19:19:49.303002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:21:39.902 [2024-11-27 19:19:49.303010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.309160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.309202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:39.902 [2024-11-27 19:19:49.309216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:21:39.902 [2024-11-27 19:19:49.309224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.336100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.336162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:39.902 [2024-11-27 19:19:49.336178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.783 ms 00:21:39.902 [2024-11-27 19:19:49.336186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.354982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.355038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:39.902 [2024-11-27 19:19:49.355054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.730 ms 00:21:39.902 [2024-11-27 19:19:49.355062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.355260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.355274] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:39.902 [2024-11-27 19:19:49.355287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:21:39.902 [2024-11-27 19:19:49.355294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.381431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.381480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:39.902 [2024-11-27 19:19:49.381495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.109 ms 00:21:39.902 [2024-11-27 19:19:49.381502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.407189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.407243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:39.902 [2024-11-27 19:19:49.407258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.627 ms 00:21:39.902 [2024-11-27 19:19:49.407265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.432791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.432844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:39.902 [2024-11-27 19:19:49.432858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.465 ms 00:21:39.902 [2024-11-27 19:19:49.432865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.458177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.902 [2024-11-27 19:19:49.458216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:39.902 [2024-11-27 19:19:49.458230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.204 ms 00:21:39.902 [2024-11-27 19:19:49.458237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.902 [2024-11-27 19:19:49.458291] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:39.902 [2024-11-27 19:19:49.458307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458401] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:39.902 [2024-11-27 19:19:49.458472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 
[2024-11-27 19:19:49.458622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:39.903 [2024-11-27 19:19:49.458875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.458994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:39.903 [2024-11-27 19:19:49.459260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:39.904 [2024-11-27 19:19:49.459270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:39.904 [2024-11-27 19:19:49.459287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:39.904 [2024-11-27 19:19:49.459298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a739b9e0-5d65-489d-af21-aa598e8a13af 00:21:39.904 [2024-11-27 19:19:49.459307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:39.904 [2024-11-27 19:19:49.459319] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:39.904 [2024-11-27 19:19:49.459330] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:39.904 [2024-11-27 19:19:49.459341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:39.904 [2024-11-27 19:19:49.459349] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:39.904 [2024-11-27 19:19:49.459359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:39.904 [2024-11-27 19:19:49.459367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:39.904 [2024-11-27 19:19:49.459376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:39.904 [2024-11-27 19:19:49.459383] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:39.904 [2024-11-27 19:19:49.459393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.904 [2024-11-27 19:19:49.459401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:39.904 [2024-11-27 19:19:49.459412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:21:39.904 [2024-11-27 19:19:49.459422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.473540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.904 [2024-11-27 19:19:49.473589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:39.904 [2024-11-27 19:19:49.473602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.068 ms 00:21:39.904 [2024-11-27 19:19:49.473611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.474029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.904 [2024-11-27 19:19:49.474051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:39.904 [2024-11-27 19:19:49.474065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:21:39.904 [2024-11-27 19:19:49.474073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.520546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.904 [2024-11-27 19:19:49.520601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.904 [2024-11-27 19:19:49.520616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.904 [2024-11-27 19:19:49.520625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.520697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.904 [2024-11-27 19:19:49.520706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.904 [2024-11-27 19:19:49.520719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.904 [2024-11-27 19:19:49.520728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.520829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.904 [2024-11-27 19:19:49.520841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.904 [2024-11-27 19:19:49.520851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.904 [2024-11-27 19:19:49.520859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.904 [2024-11-27 19:19:49.520883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:39.904 [2024-11-27 19:19:49.520893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.904 [2024-11-27 19:19:49.520903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:39.904 [2024-11-27 19:19:49.520914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.606032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.606092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.164 [2024-11-27 19:19:49.606110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:40.164 [2024-11-27 19:19:49.606119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.676486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.676545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.164 [2024-11-27 19:19:49.676564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.676572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.676672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.676683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.164 [2024-11-27 19:19:49.676695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.676704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.676776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.676786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.164 [2024-11-27 19:19:49.676797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.676805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.676919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.676930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.164 [2024-11-27 19:19:49.676941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.676949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.676991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.677001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:40.164 [2024-11-27 19:19:49.677011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.677019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.677068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.677089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.164 [2024-11-27 19:19:49.677100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.677108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.677180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.164 [2024-11-27 19:19:49.677193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.164 [2024-11-27 19:19:49.677203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.164 [2024-11-27 19:19:49.677211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.164 [2024-11-27 19:19:49.677369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.213 ms, result 0 00:21:40.164 true 00:21:40.164 19:19:49 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77339 
00:21:40.164 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77339 ']' 00:21:40.164 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77339 00:21:40.164 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77339 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.165 killing process with pid 77339 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77339' 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77339 00:21:40.165 19:19:49 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77339 00:21:46.753 19:19:56 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:50.960 262144+0 records in 00:21:50.960 262144+0 records out 00:21:50.960 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85213 s, 279 MB/s 00:21:50.960 19:19:59 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:52.347 19:20:01 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:52.347 [2024-11-27 19:20:01.605958] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:52.347 [2024-11-27 19:20:01.606079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77575 ] 00:21:52.347 [2024-11-27 19:20:01.765914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.347 [2024-11-27 19:20:01.882446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.607 [2024-11-27 19:20:02.177464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.607 [2024-11-27 19:20:02.177547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.870 [2024-11-27 19:20:02.339010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.870 [2024-11-27 19:20:02.339075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:52.870 [2024-11-27 19:20:02.339090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:52.870 [2024-11-27 19:20:02.339099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.870 [2024-11-27 19:20:02.339180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.870 [2024-11-27 19:20:02.339194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.870 [2024-11-27 19:20:02.339204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:52.870 [2024-11-27 19:20:02.339212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.870 [2024-11-27 19:20:02.339235] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:52.870 [2024-11-27 19:20:02.340080] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:52.870 [2024-11-27 19:20:02.340146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.870 [2024-11-27 19:20:02.340156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.870 [2024-11-27 19:20:02.340166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:21:52.870 [2024-11-27 19:20:02.340174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.341994] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:52.871 [2024-11-27 19:20:02.355924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.355974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:52.871 [2024-11-27 19:20:02.355987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.932 ms 00:21:52.871 [2024-11-27 19:20:02.355995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.357605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.357644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:52.871 [2024-11-27 19:20:02.357655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:52.871 [2024-11-27 19:20:02.357663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.365578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.365619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.871 [2024-11-27 19:20:02.365630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.837 ms 00:21:52.871 [2024-11-27 19:20:02.365645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.365723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.365732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.871 [2024-11-27 19:20:02.365740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:52.871 [2024-11-27 19:20:02.365748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.365791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.365802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:52.871 [2024-11-27 19:20:02.365811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:52.871 [2024-11-27 19:20:02.365818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.365846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:52.871 [2024-11-27 19:20:02.369826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.369868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.871 [2024-11-27 19:20:02.369882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.988 ms 00:21:52.871 [2024-11-27 19:20:02.369891] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.369927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.369936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:52.871 [2024-11-27 19:20:02.369945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:52.871 [2024-11-27 19:20:02.369953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.370004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:52.871 [2024-11-27 19:20:02.370027] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:52.871 [2024-11-27 19:20:02.370065] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:52.871 [2024-11-27 19:20:02.370084] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:52.871 [2024-11-27 19:20:02.370205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:52.871 [2024-11-27 19:20:02.370217] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:52.871 [2024-11-27 19:20:02.370229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:52.871 [2024-11-27 19:20:02.370240] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370250] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370259] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:52.871 [2024-11-27 19:20:02.370267] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:52.871 [2024-11-27 19:20:02.370278] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:52.871 [2024-11-27 19:20:02.370287] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:52.871 [2024-11-27 19:20:02.370295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.370303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:52.871 [2024-11-27 19:20:02.370311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:52.871 [2024-11-27 19:20:02.370318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.370402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.871 [2024-11-27 19:20:02.370419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:52.871 [2024-11-27 19:20:02.370427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:52.871 [2024-11-27 19:20:02.370434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.871 [2024-11-27 19:20:02.370544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:52.871 [2024-11-27 19:20:02.370554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:52.871 [2024-11-27 19:20:02.370563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:52.871 [2024-11-27 19:20:02.370571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:52.871 [2024-11-27 19:20:02.370586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:52.871 [2024-11-27 19:20:02.370607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.871 [2024-11-27 19:20:02.370621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:52.871 [2024-11-27 19:20:02.370628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:52.871 [2024-11-27 19:20:02.370635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.871 [2024-11-27 19:20:02.370652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:52.871 [2024-11-27 19:20:02.370659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:52.871 [2024-11-27 19:20:02.370667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:52.871 [2024-11-27 19:20:02.370681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:52.871 [2024-11-27 19:20:02.370703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:52.871 [2024-11-27 19:20:02.370749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:52.871 [2024-11-27 19:20:02.370771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:52.871 [2024-11-27 19:20:02.370793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:52.871 [2024-11-27 19:20:02.370799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.871 [2024-11-27 19:20:02.370806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:52.871 [2024-11-27 19:20:02.370813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:52.872 [2024-11-27 19:20:02.370820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.872 [2024-11-27 19:20:02.370827] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:52.872 [2024-11-27 19:20:02.370834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:52.872 [2024-11-27 19:20:02.370841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.872 [2024-11-27 19:20:02.370848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:52.872 [2024-11-27 19:20:02.370855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:52.872 [2024-11-27 19:20:02.370862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.872 [2024-11-27 19:20:02.370869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:52.872 [2024-11-27 19:20:02.370875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:52.872 [2024-11-27 19:20:02.370882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.872 [2024-11-27 19:20:02.370889] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:52.872 [2024-11-27 19:20:02.370897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:52.872 [2024-11-27 19:20:02.370906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.872 [2024-11-27 19:20:02.370914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.872 [2024-11-27 19:20:02.370921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:52.872 [2024-11-27 19:20:02.370928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:52.872 [2024-11-27 19:20:02.370936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:52.872 [2024-11-27 19:20:02.370943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:52.872 [2024-11-27 19:20:02.370950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:52.872 [2024-11-27 19:20:02.370957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:52.872 [2024-11-27 19:20:02.370966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:52.872 [2024-11-27 19:20:02.370976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.370988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:52.872 [2024-11-27 19:20:02.370996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:52.872 [2024-11-27 19:20:02.371002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:52.872 [2024-11-27 19:20:02.371009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:52.872 [2024-11-27 19:20:02.371017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:52.872 [2024-11-27 19:20:02.371025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:52.872 [2024-11-27 19:20:02.371032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:52.872 [2024-11-27 19:20:02.371039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:52.872 [2024-11-27 19:20:02.371060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:52.872 [2024-11-27 19:20:02.371067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:52.872 [2024-11-27 19:20:02.371103] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:52.872 [2024-11-27 19:20:02.371111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:52.872 [2024-11-27 19:20:02.371150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:52.872 [2024-11-27 19:20:02.371157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:52.872 [2024-11-27 19:20:02.371165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:52.872 [2024-11-27 19:20:02.371172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.371180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:52.872 [2024-11-27 19:20:02.371189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:21:52.872 [2024-11-27 19:20:02.371196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.402819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.402866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.872 [2024-11-27 19:20:02.402880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.576 ms 00:21:52.872 [2024-11-27 19:20:02.402893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.402988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.402996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.872 [2024-11-27 19:20:02.403005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.068 ms 00:21:52.872 [2024-11-27 19:20:02.403013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.451005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.451058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.872 [2024-11-27 19:20:02.451072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.932 ms 00:21:52.872 [2024-11-27 19:20:02.451080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.451143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.451154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.872 [2024-11-27 19:20:02.451167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:52.872 [2024-11-27 19:20:02.451175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.451776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.451812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.872 [2024-11-27 19:20:02.451824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:21:52.872 [2024-11-27 19:20:02.451832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.451989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.452000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.872 [2024-11-27 19:20:02.452014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:52.872 [2024-11-27 19:20:02.452022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.467589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.467637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.872 [2024-11-27 19:20:02.467648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.547 ms 00:21:52.872 [2024-11-27 19:20:02.467656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.872 [2024-11-27 19:20:02.481967] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:52.872 [2024-11-27 19:20:02.482017] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:52.872 [2024-11-27 19:20:02.482032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.872 [2024-11-27 19:20:02.482040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:52.872 [2024-11-27 19:20:02.482049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.268 ms 00:21:52.873 [2024-11-27 19:20:02.482056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.507776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.507831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:53.135 [2024-11-27 19:20:02.507843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.666 ms 00:21:53.135 [2024-11-27 19:20:02.507852] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.520927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.520974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:53.135 [2024-11-27 19:20:02.520986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms 00:21:53.135 [2024-11-27 19:20:02.520993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.533660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.533705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:53.135 [2024-11-27 19:20:02.533717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:21:53.135 [2024-11-27 19:20:02.533724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.534376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.534409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.135 [2024-11-27 19:20:02.534419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:21:53.135 [2024-11-27 19:20:02.534430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.602106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.602185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:53.135 [2024-11-27 19:20:02.602201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.656 ms 00:21:53.135 [2024-11-27 19:20:02.602216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.613138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:53.135 [2024-11-27 19:20:02.616226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.616266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:53.135 [2024-11-27 19:20:02.616279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.956 ms 00:21:53.135 [2024-11-27 19:20:02.616289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.616371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.616382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:53.135 [2024-11-27 19:20:02.616391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:53.135 [2024-11-27 19:20:02.616400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.616472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.616483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:53.135 [2024-11-27 19:20:02.616493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:53.135 [2024-11-27 19:20:02.616501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.616520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.616529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:53.135 [2024-11-27 19:20:02.616538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:53.135 [2024-11-27 19:20:02.616545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.616581] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:53.135 [2024-11-27 19:20:02.616594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.616603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:53.135 [2024-11-27 19:20:02.616611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:53.135 [2024-11-27 19:20:02.616619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.642320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.642366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:53.135 [2024-11-27 19:20:02.642379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.683 ms 00:21:53.135 [2024-11-27 19:20:02.642393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.135 [2024-11-27 19:20:02.642480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.135 [2024-11-27 19:20:02.642491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:53.135 [2024-11-27 19:20:02.642500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:53.135 [2024-11-27 19:20:02.642508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.136 [2024-11-27 19:20:02.644605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.082 ms, result 0 00:21:54.080  [2024-11-27T19:20:04.660Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-27T19:20:06.049Z] Copying: 35/1024 [MB] (13 MBps) [2024-11-27T19:20:06.990Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-27T19:20:07.931Z] Copying: 73/1024 [MB] (21 MBps) [2024-11-27T19:20:08.872Z] Copying: 97/1024 [MB] (24 MBps) [2024-11-27T19:20:09.817Z] Copying: 127/1024 [MB] (29 MBps) [2024-11-27T19:20:10.761Z] Copying: 143/1024 [MB] (16 MBps) [2024-11-27T19:20:11.705Z] Copying: 157/1024 [MB] (13 MBps) [2024-11-27T19:20:13.093Z] Copying: 178/1024 [MB] (20 MBps) [2024-11-27T19:20:13.664Z] Copying: 196/1024 [MB] (18 MBps) [2024-11-27T19:20:15.099Z] Copying: 233/1024 [MB] (36 MBps) [2024-11-27T19:20:15.672Z] Copying: 247/1024 [MB] (14 MBps) [2024-11-27T19:20:17.062Z] Copying: 263/1024 [MB] (15 MBps) [2024-11-27T19:20:18.008Z] Copying: 280/1024 [MB] (16 MBps) [2024-11-27T19:20:18.954Z] Copying: 295/1024 [MB] (14 MBps) [2024-11-27T19:20:19.897Z] Copying: 308/1024 [MB] (13 MBps) [2024-11-27T19:20:20.842Z] Copying: 335/1024 [MB] (26 MBps) [2024-11-27T19:20:21.787Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-27T19:20:22.733Z] Copying: 360/1024 [MB] (14 MBps) [2024-11-27T19:20:23.681Z] Copying: 373/1024 [MB] (12 MBps) [2024-11-27T19:20:25.071Z] Copying: 385/1024 [MB] (11 MBps) [2024-11-27T19:20:26.017Z] Copying: 401/1024 [MB] (15 MBps) [2024-11-27T19:20:26.963Z] Copying: 413/1024 [MB] (12 MBps) [2024-11-27T19:20:27.912Z] Copying: 427/1024 [MB] (14 MBps) [2024-11-27T19:20:28.857Z] Copying: 442/1024 [MB] (14 MBps) [2024-11-27T19:20:29.801Z] Copying: 458/1024 [MB] (15 MBps) [2024-11-27T19:20:30.745Z] Copying: 471/1024 [MB] (12 MBps) 
[2024-11-27T19:20:31.690Z] Copying: 483/1024 [MB] (12 MBps) [2024-11-27T19:20:33.079Z] Copying: 497/1024 [MB] (13 MBps) [2024-11-27T19:20:34.022Z] Copying: 511/1024 [MB] (13 MBps) [2024-11-27T19:20:34.965Z] Copying: 532/1024 [MB] (21 MBps) [2024-11-27T19:20:35.909Z] Copying: 543/1024 [MB] (11 MBps) [2024-11-27T19:20:36.854Z] Copying: 559/1024 [MB] (15 MBps) [2024-11-27T19:20:37.798Z] Copying: 580/1024 [MB] (21 MBps) [2024-11-27T19:20:38.743Z] Copying: 597/1024 [MB] (16 MBps) [2024-11-27T19:20:39.688Z] Copying: 620/1024 [MB] (22 MBps) [2024-11-27T19:20:41.088Z] Copying: 636/1024 [MB] (16 MBps) [2024-11-27T19:20:41.662Z] Copying: 657/1024 [MB] (21 MBps) [2024-11-27T19:20:43.118Z] Copying: 677/1024 [MB] (19 MBps) [2024-11-27T19:20:43.710Z] Copying: 689/1024 [MB] (11 MBps) [2024-11-27T19:20:45.098Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-27T19:20:45.670Z] Copying: 717/1024 [MB] (17 MBps) [2024-11-27T19:20:47.056Z] Copying: 736/1024 [MB] (19 MBps) [2024-11-27T19:20:48.001Z] Copying: 750/1024 [MB] (14 MBps) [2024-11-27T19:20:48.946Z] Copying: 764/1024 [MB] (13 MBps) [2024-11-27T19:20:49.890Z] Copying: 778/1024 [MB] (13 MBps) [2024-11-27T19:20:50.835Z] Copying: 807/1024 [MB] (28 MBps) [2024-11-27T19:20:51.778Z] Copying: 825/1024 [MB] (17 MBps) [2024-11-27T19:20:52.720Z] Copying: 855096/1048576 [kB] (10240 kBps) [2024-11-27T19:20:53.663Z] Copying: 860/1024 [MB] (25 MBps) [2024-11-27T19:20:55.048Z] Copying: 879/1024 [MB] (18 MBps) [2024-11-27T19:20:56.078Z] Copying: 896/1024 [MB] (17 MBps) [2024-11-27T19:20:57.022Z] Copying: 909/1024 [MB] (13 MBps) [2024-11-27T19:20:57.967Z] Copying: 926/1024 [MB] (16 MBps) [2024-11-27T19:20:58.911Z] Copying: 939/1024 [MB] (13 MBps) [2024-11-27T19:20:59.856Z] Copying: 954/1024 [MB] (14 MBps) [2024-11-27T19:21:00.801Z] Copying: 967/1024 [MB] (12 MBps) [2024-11-27T19:21:01.748Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-27T19:21:02.693Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-27T19:21:04.079Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-27T19:21:04.079Z] Copying: 1016/1024 [MB] (15 MBps) [2024-11-27T19:21:04.079Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-27 19:21:03.980968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:03.981003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.444 [2024-11-27 19:21:03.981014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:54.444 [2024-11-27 19:21:03.981021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:03.981037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:54.444 [2024-11-27 19:21:03.983172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:03.983199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.444 [2024-11-27 19:21:03.983211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:22:54.444 [2024-11-27 19:21:03.983217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:03.984713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:03.984740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.444 [2024-11-27 19:21:03.984748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:22:54.444 [2024-11-27 19:21:03.984754] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:03.996964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:03.996992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.444 [2024-11-27 19:21:03.997000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.198 ms 00:22:54.444 [2024-11-27 19:21:03.997006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.001879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.001902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.444 [2024-11-27 19:21:04.001910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:22:54.444 [2024-11-27 19:21:04.001915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.019993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.020020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.444 [2024-11-27 19:21:04.020028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.040 ms 00:22:54.444 [2024-11-27 19:21:04.020034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.031781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.031809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.444 [2024-11-27 19:21:04.031818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.721 ms 00:22:54.444 [2024-11-27 19:21:04.031825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.031908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.031919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.444 [2024-11-27 19:21:04.031925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:54.444 [2024-11-27 19:21:04.031931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.049790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.049817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.444 [2024-11-27 19:21:04.049825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.849 ms 00:22:54.444 [2024-11-27 19:21:04.049830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.444 [2024-11-27 19:21:04.067095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.444 [2024-11-27 19:21:04.067120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.445 [2024-11-27 19:21:04.067137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.241 ms 00:22:54.445 [2024-11-27 19:21:04.067143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.707 [2024-11-27 19:21:04.084176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.707 [2024-11-27 19:21:04.084203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.707 [2024-11-27 19:21:04.084210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.009 ms 00:22:54.707 [2024-11-27 19:21:04.084215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.707 [2024-11-27 19:21:04.101608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.707 [2024-11-27 19:21:04.101634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.707 [2024-11-27 19:21:04.101642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.351 ms 00:22:54.707 [2024-11-27 19:21:04.101647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.707 [2024-11-27 19:21:04.101670] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.707 [2024-11-27 19:21:04.101681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.707 [2024-11-27 19:21:04.101771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:22:54.708 [2024-11-27 19:21:04.101799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.101996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102224] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.708 [2024-11-27 19:21:04.102264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.708 [2024-11-27 19:21:04.102271] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a739b9e0-5d65-489d-af21-aa598e8a13af 00:22:54.708 [2024-11-27 19:21:04.102277] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:54.708 [2024-11-27 19:21:04.102282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:54.708 [2024-11-27 19:21:04.102287] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:54.708 [2024-11-27 19:21:04.102293] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:54.708 [2024-11-27 19:21:04.102298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.709 [2024-11-27 19:21:04.102310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.709 [2024-11-27 19:21:04.102315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.709 [2024-11-27 19:21:04.102321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.709 [2024-11-27 19:21:04.102326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.709 [2024-11-27 19:21:04.102331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.709 [2024-11-27 19:21:04.102337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.709 [2024-11-27 19:21:04.102343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:22:54.709 [2024-11-27 19:21:04.102349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.111678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.709 [2024-11-27 19:21:04.111702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.709 [2024-11-27 19:21:04.111710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.315 ms 00:22:54.709 [2024-11-27 19:21:04.111717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.111984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.709 [2024-11-27 19:21:04.111996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.709 [2024-11-27 19:21:04.112003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:22:54.709 [2024-11-27 19:21:04.112013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.137659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.137687] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.709 [2024-11-27 19:21:04.137694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.137701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.137738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.137744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.709 [2024-11-27 19:21:04.137750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.137758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.137795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.137803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.709 [2024-11-27 19:21:04.137809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.137814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.137826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.137833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.709 [2024-11-27 19:21:04.137838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.137844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.196417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.196454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.709 [2024-11-27 19:21:04.196462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.196468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.709 [2024-11-27 19:21:04.244506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.709 [2024-11-27 19:21:04.244581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.709 [2024-11-27 19:21:04.244624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244701] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.709 [2024-11-27 19:21:04.244714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.709 [2024-11-27 19:21:04.244754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.709 [2024-11-27 19:21:04.244803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.709 [2024-11-27 19:21:04.244846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.709 [2024-11-27 19:21:04.244852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.709 [2024-11-27 19:21:04.244858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.709 [2024-11-27 19:21:04.244945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.955 ms, result 0 00:22:55.281 00:22:55.281 00:22:55.281 19:21:04 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:55.542 [2024-11-27 19:21:04.926169] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:55.542 [2024-11-27 19:21:04.926278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78229 ] 00:22:55.542 [2024-11-27 19:21:05.079976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.542 [2024-11-27 19:21:05.160468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.804 [2024-11-27 19:21:05.368993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.804 [2024-11-27 19:21:05.369048] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.066 [2024-11-27 19:21:05.520485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.520524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:56.066 [2024-11-27 19:21:05.520534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.066 [2024-11-27 19:21:05.520541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.520573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.520583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.066 [2024-11-27 19:21:05.520589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:56.066 [2024-11-27 19:21:05.520595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.520607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:56.066 [2024-11-27 19:21:05.521116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:56.066 [2024-11-27 19:21:05.521142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.521148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.066 [2024-11-27 19:21:05.521155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:22:56.066 [2024-11-27 19:21:05.521160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.522066] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:56.066 [2024-11-27 19:21:05.531658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.531688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:56.066 [2024-11-27 19:21:05.531696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:22:56.066 [2024-11-27 19:21:05.531703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.531746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.531753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:56.066 [2024-11-27 19:21:05.531759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:56.066 [2024-11-27 19:21:05.531765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.536065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:56.066 [2024-11-27 19:21:05.536090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.066 [2024-11-27 19:21:05.536097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:22:56.066 [2024-11-27 19:21:05.536106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.536173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.536181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.066 [2024-11-27 19:21:05.536187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:56.066 [2024-11-27 19:21:05.536193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.536223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.066 [2024-11-27 19:21:05.536230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.066 [2024-11-27 19:21:05.536236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.066 [2024-11-27 19:21:05.536241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.066 [2024-11-27 19:21:05.536259] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.067 [2024-11-27 19:21:05.538818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.067 [2024-11-27 19:21:05.538843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.067 [2024-11-27 19:21:05.538852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:22:56.067 [2024-11-27 19:21:05.538857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.067 [2024-11-27 19:21:05.538880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.067 [2024-11-27 19:21:05.538887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.067 [2024-11-27 19:21:05.538893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:56.067 [2024-11-27 19:21:05.538898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.067 [2024-11-27 19:21:05.538913] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:56.067 [2024-11-27 19:21:05.538926] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:56.067 [2024-11-27 19:21:05.538952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:56.067 [2024-11-27 19:21:05.538965] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:56.067 [2024-11-27 19:21:05.539042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.067 [2024-11-27 19:21:05.539050] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.067 [2024-11-27 19:21:05.539058] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:56.067 [2024-11-27 19:21:05.539066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539072] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539079] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:56.067 [2024-11-27 19:21:05.539084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.067 [2024-11-27 19:21:05.539092] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.067 [2024-11-27 19:21:05.539098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.067 [2024-11-27 19:21:05.539103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.067 [2024-11-27 19:21:05.539109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.067 [2024-11-27 19:21:05.539115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:22:56.067 [2024-11-27 19:21:05.539120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.067 [2024-11-27 19:21:05.539192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.067 [2024-11-27 19:21:05.539199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.067 [2024-11-27 19:21:05.539204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:56.067 [2024-11-27 19:21:05.539209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.067 [2024-11-27 19:21:05.539287] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.067 [2024-11-27 19:21:05.539294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.067 [2024-11-27 19:21:05.539300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.067 [2024-11-27 19:21:05.539317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.067 [2024-11-27 19:21:05.539333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.067 [2024-11-27 19:21:05.539343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.067 [2024-11-27 19:21:05.539348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:56.067 [2024-11-27 19:21:05.539352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.067 [2024-11-27 19:21:05.539362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.067 [2024-11-27 19:21:05.539368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:56.067 [2024-11-27 19:21:05.539373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:56.067 [2024-11-27 19:21:05.539383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539388] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.067 [2024-11-27 19:21:05.539398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.067 [2024-11-27 19:21:05.539412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.067 [2024-11-27 19:21:05.539427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.067 [2024-11-27 19:21:05.539441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.067 [2024-11-27 19:21:05.539455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.067 [2024-11-27 19:21:05.539464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.067 [2024-11-27 19:21:05.539469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:56.067 [2024-11-27 19:21:05.539474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.067 [2024-11-27 19:21:05.539478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.067 [2024-11-27 19:21:05.539483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:56.067 [2024-11-27 19:21:05.539488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.067 [2024-11-27 19:21:05.539497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:56.067 [2024-11-27 19:21:05.539502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.067 [2024-11-27 19:21:05.539512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.067 [2024-11-27 19:21:05.539517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.067 [2024-11-27 19:21:05.539531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:56.067 [2024-11-27 19:21:05.539536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.067 [2024-11-27 19:21:05.539540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.067 
[2024-11-27 19:21:05.539546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.067 [2024-11-27 19:21:05.539550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.067 [2024-11-27 19:21:05.539555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.067 [2024-11-27 19:21:05.539561] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.067 [2024-11-27 19:21:05.539568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.067 [2024-11-27 19:21:05.539576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:56.067 [2024-11-27 19:21:05.539581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:56.067 [2024-11-27 19:21:05.539587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:56.067 [2024-11-27 19:21:05.539593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:56.067 [2024-11-27 19:21:05.539598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:56.067 [2024-11-27 19:21:05.539603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:56.067 [2024-11-27 19:21:05.539608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:56.067 [2024-11-27 19:21:05.539613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:56.067 [2024-11-27 19:21:05.539618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:56.067 [2024-11-27 19:21:05.539623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:56.067 [2024-11-27 19:21:05.539629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:56.067 [2024-11-27 19:21:05.539634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:56.067 [2024-11-27 19:21:05.539639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:56.068 [2024-11-27 19:21:05.539645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:56.068 [2024-11-27 19:21:05.539650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.068 [2024-11-27 19:21:05.539656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.068 [2024-11-27 19:21:05.539662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.068 [2024-11-27 19:21:05.539668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.068 [2024-11-27 19:21:05.539673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.068 [2024-11-27 19:21:05.539678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.068 [2024-11-27 19:21:05.539684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.539689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.068 [2024-11-27 19:21:05.539695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:22:56.068 [2024-11-27 19:21:05.539703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.560308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.560336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.068 [2024-11-27 19:21:05.560346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.574 ms 00:22:56.068 [2024-11-27 19:21:05.560351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.560410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.560417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:56.068 [2024-11-27 19:21:05.560423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:56.068 [2024-11-27 19:21:05.560430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.601056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.601088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.068 [2024-11-27 19:21:05.601098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.590 ms 00:22:56.068 [2024-11-27 19:21:05.601103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.601136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.601148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.068 [2024-11-27 19:21:05.601155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:56.068 [2024-11-27 19:21:05.601160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.601475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.601496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.068 [2024-11-27 19:21:05.601504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:22:56.068 [2024-11-27 19:21:05.601509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.601606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.601613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.068 [2024-11-27 19:21:05.601622] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:56.068 [2024-11-27 19:21:05.601628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.612014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.612043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.068 [2024-11-27 19:21:05.612050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.371 ms 00:22:56.068 [2024-11-27 19:21:05.612056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.621879] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:56.068 [2024-11-27 19:21:05.621906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:56.068 [2024-11-27 19:21:05.621915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.621921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:56.068 [2024-11-27 19:21:05.621928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.774 ms 00:22:56.068 [2024-11-27 19:21:05.621934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.640445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.640472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:56.068 [2024-11-27 19:21:05.640481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.481 ms 00:22:56.068 [2024-11-27 19:21:05.640487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.649366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.649394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:56.068 [2024-11-27 19:21:05.649401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.844 ms 00:22:56.068 [2024-11-27 19:21:05.649406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.657973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.657998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:56.068 [2024-11-27 19:21:05.658005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.542 ms 00:22:56.068 [2024-11-27 19:21:05.658010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.068 [2024-11-27 19:21:05.658471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.068 [2024-11-27 19:21:05.658493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:56.068 [2024-11-27 19:21:05.658500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:22:56.068 [2024-11-27 19:21:05.658506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.702521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.702561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:56.330 [2024-11-27 19:21:05.702571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.002 ms 00:22:56.330 [2024-11-27 19:21:05.702577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.710301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:56.330 [2024-11-27 19:21:05.712103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.712136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:56.330 [2024-11-27 19:21:05.712145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.495 ms 00:22:56.330 [2024-11-27 19:21:05.712152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.712203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.712212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:56.330 [2024-11-27 19:21:05.712221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:56.330 [2024-11-27 19:21:05.712227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.712268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.712276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:56.330 [2024-11-27 19:21:05.712283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:56.330 [2024-11-27 19:21:05.712289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.712305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.712312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:56.330 [2024-11-27 19:21:05.712319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.330 [2024-11-27 19:21:05.712327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.712351] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:56.330 [2024-11-27 19:21:05.712359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.712366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:56.330 [2024-11-27 19:21:05.712373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:56.330 [2024-11-27 19:21:05.712379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.729813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.729840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:56.330 [2024-11-27 19:21:05.729851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.420 ms 00:22:56.330 [2024-11-27 19:21:05.729858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 [2024-11-27 19:21:05.729910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.330 [2024-11-27 19:21:05.729918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:56.330 [2024-11-27 19:21:05.729924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:56.330 [2024-11-27 19:21:05.729930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.330 
[2024-11-27 19:21:05.730739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.930 ms, result 0 00:22:57.276  [2024-11-27T19:21:08.300Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T19:21:08.872Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-27T19:21:10.256Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-27T19:21:11.202Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-27T19:21:12.145Z] Copying: 54/1024 [MB] (10 MBps) [2024-11-27T19:21:13.091Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-27T19:21:14.035Z] Copying: 79/1024 [MB] (14 MBps) [2024-11-27T19:21:14.980Z] Copying: 99/1024 [MB] (20 MBps) [2024-11-27T19:21:15.926Z] Copying: 114/1024 [MB] (15 MBps) [2024-11-27T19:21:16.870Z] Copying: 138/1024 [MB] (24 MBps) [2024-11-27T19:21:18.259Z] Copying: 152/1024 [MB] (13 MBps) [2024-11-27T19:21:19.203Z] Copying: 172/1024 [MB] (20 MBps) [2024-11-27T19:21:20.146Z] Copying: 187/1024 [MB] (15 MBps) [2024-11-27T19:21:21.092Z] Copying: 211/1024 [MB] (23 MBps) [2024-11-27T19:21:22.038Z] Copying: 235/1024 [MB] (23 MBps) [2024-11-27T19:21:22.984Z] Copying: 254/1024 [MB] (18 MBps) [2024-11-27T19:21:23.931Z] Copying: 264/1024 [MB] (10 MBps) [2024-11-27T19:21:24.876Z] Copying: 279/1024 [MB] (14 MBps) [2024-11-27T19:21:26.300Z] Copying: 291/1024 [MB] (12 MBps) [2024-11-27T19:21:26.873Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-27T19:21:28.258Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-27T19:21:29.201Z] Copying: 324/1024 [MB] (10 MBps) [2024-11-27T19:21:30.145Z] Copying: 334/1024 [MB] (10 MBps) [2024-11-27T19:21:31.089Z] Copying: 348/1024 [MB] (14 MBps) [2024-11-27T19:21:32.035Z] Copying: 371/1024 [MB] (23 MBps) [2024-11-27T19:21:32.981Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-27T19:21:33.927Z] Copying: 399/1024 [MB] (17 MBps) [2024-11-27T19:21:34.871Z] Copying: 415/1024 [MB] (16 MBps) [2024-11-27T19:21:36.260Z] Copying: 432/1024 [MB] (17 MBps) [2024-11-27T19:21:37.205Z] Copying: 451/1024 [MB] (18 MBps) [2024-11-27T19:21:38.151Z] Copying: 466/1024 [MB] (15 MBps) [2024-11-27T19:21:39.096Z] Copying: 485/1024 [MB] (18 MBps) [2024-11-27T19:21:40.097Z] Copying: 504/1024 [MB] (19 MBps) [2024-11-27T19:21:41.067Z] Copying: 522/1024 [MB] (17 MBps) [2024-11-27T19:21:42.010Z] Copying: 535/1024 [MB] (13 MBps) [2024-11-27T19:21:42.955Z] Copying: 557/1024 [MB] (21 MBps) [2024-11-27T19:21:43.900Z] Copying: 576/1024 [MB] (19 MBps) [2024-11-27T19:21:45.289Z] Copying: 596/1024 [MB] (20 MBps) [2024-11-27T19:21:46.235Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-27T19:21:47.180Z] Copying: 630/1024 [MB] (18 MBps) [2024-11-27T19:21:48.126Z] Copying: 648/1024 [MB] (17 MBps) [2024-11-27T19:21:49.069Z] Copying: 658/1024 [MB] (10 MBps) [2024-11-27T19:21:50.014Z] Copying: 681/1024 [MB] (22 MBps) [2024-11-27T19:21:50.958Z] Copying: 693/1024 [MB] (11 MBps) [2024-11-27T19:21:51.903Z] Copying: 712/1024 [MB] (18 MBps) [2024-11-27T19:21:53.289Z] Copying: 731/1024 [MB] (18 MBps) [2024-11-27T19:21:54.235Z] Copying: 746/1024 [MB] (15 MBps) [2024-11-27T19:21:55.181Z] Copying: 768/1024 [MB] (21 MBps) [2024-11-27T19:21:56.126Z] Copying: 780/1024 [MB] (12 MBps) [2024-11-27T19:21:57.070Z] Copying: 801/1024 [MB] (21 MBps) [2024-11-27T19:21:58.014Z] Copying: 818/1024 [MB] (17 MBps) [2024-11-27T19:21:58.959Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-27T19:21:59.903Z] Copying: 846/1024 [MB] (16 MBps) [2024-11-27T19:22:01.292Z] Copying: 863/1024 [MB] (17 MBps) [2024-11-27T19:22:02.237Z] Copying: 878/1024 [MB] (15 MBps) [2024-11-27T19:22:03.180Z] Copying: 891/1024 [MB] (12 MBps) 
[2024-11-27T19:22:04.125Z] Copying: 912/1024 [MB] (20 MBps) [2024-11-27T19:22:05.070Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-27T19:22:06.013Z] Copying: 945/1024 [MB] (18 MBps) [2024-11-27T19:22:07.008Z] Copying: 961/1024 [MB] (15 MBps) [2024-11-27T19:22:07.952Z] Copying: 977/1024 [MB] (15 MBps) [2024-11-27T19:22:08.898Z] Copying: 990/1024 [MB] (12 MBps) [2024-11-27T19:22:10.288Z] Copying: 1001/1024 [MB] (11 MBps) [2024-11-27T19:22:10.288Z] Copying: 1022/1024 [MB] (21 MBps) [2024-11-27T19:22:10.288Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 19:22:10.034227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.034363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:00.653 [2024-11-27 19:22:10.034397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:00.653 [2024-11-27 19:22:10.034420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.034478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.653 [2024-11-27 19:22:10.037779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.037834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:00.653 [2024-11-27 19:22:10.037846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:24:00.653 [2024-11-27 19:22:10.037855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.038079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.038090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:00.653 [2024-11-27 19:22:10.038099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:24:00.653 [2024-11-27 19:22:10.038106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.042146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.042173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:00.653 [2024-11-27 19:22:10.042189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.025 ms 00:24:00.653 [2024-11-27 19:22:10.042197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.048768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.048814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:00.653 [2024-11-27 19:22:10.048825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.550 ms 00:24:00.653 [2024-11-27 19:22:10.048833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.075093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.075152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:00.653 [2024-11-27 19:22:10.075165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.198 ms 00:24:00.653 [2024-11-27 19:22:10.075173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.092019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.092067] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:00.653 [2024-11-27 19:22:10.092080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.802 ms 00:24:00.653 [2024-11-27 19:22:10.092094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.092258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.092271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:00.653 [2024-11-27 19:22:10.092281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:00.653 [2024-11-27 19:22:10.092289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.118330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.118380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:00.653 [2024-11-27 19:22:10.118392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.025 ms 00:24:00.653 [2024-11-27 19:22:10.118400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.143672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.143717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:00.653 [2024-11-27 19:22:10.143729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.226 ms 00:24:00.653 [2024-11-27 19:22:10.143737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.168231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.168277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.653 [2024-11-27 19:22:10.168288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.450 ms 00:24:00.653 [2024-11-27 19:22:10.168296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.192343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.653 [2024-11-27 19:22:10.192384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.653 [2024-11-27 19:22:10.192396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.974 ms 00:24:00.653 [2024-11-27 19:22:10.192404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.653 [2024-11-27 19:22:10.192446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.653 [2024-11-27 19:22:10.192469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 
19:22:10.192522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.653 [2024-11-27 19:22:10.192594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:24:00.654 [2024-11-27 19:22:10.192718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.192993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.654 [2024-11-27 19:22:10.193282] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.654 [2024-11-27 19:22:10.193290] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a739b9e0-5d65-489d-af21-aa598e8a13af 00:24:00.654 [2024-11-27 19:22:10.193298] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:00.654 [2024-11-27 19:22:10.193307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:00.654 [2024-11-27 19:22:10.193314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:00.655 [2024-11-27 19:22:10.193323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:00.655 [2024-11-27 19:22:10.193337] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.655 [2024-11-27 19:22:10.193344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.655 [2024-11-27 19:22:10.193352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.655 [2024-11-27 19:22:10.193358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.655 [2024-11-27 19:22:10.193365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.655 [2024-11-27 19:22:10.193372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.655 [2024-11-27 19:22:10.193380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.655 [2024-11-27 19:22:10.193392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:24:00.655 [2024-11-27 19:22:10.193400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.207136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.655 [2024-11-27 19:22:10.207178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.655 [2024-11-27 19:22:10.207190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.716 ms 00:24:00.655 [2024-11-27 19:22:10.207198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.207598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.655 [2024-11-27 19:22:10.207616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.655 [2024-11-27 19:22:10.207627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:24:00.655 [2024-11-27 19:22:10.207634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.243854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.655 [2024-11-27 19:22:10.243904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.655 [2024-11-27 19:22:10.243916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.655 [2024-11-27 19:22:10.243926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.243997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.655 [2024-11-27 19:22:10.244012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.655 [2024-11-27 19:22:10.244022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.655 [2024-11-27 19:22:10.244032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.244095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.655 [2024-11-27 19:22:10.244107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.655 [2024-11-27 19:22:10.244117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.655 [2024-11-27 19:22:10.244161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.655 [2024-11-27 19:22:10.244180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.655 [2024-11-27 19:22:10.244189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.655 [2024-11-27 19:22:10.244204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:00.655 [2024-11-27 19:22:10.244213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.329895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.329951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.916 [2024-11-27 19:22:10.329965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.329974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.399591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.399649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.916 [2024-11-27 19:22:10.399666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.399675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.399763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.399773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.916 [2024-11-27 19:22:10.399783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.399792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.399828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.399837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.916 [2024-11-27 19:22:10.399846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.399857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.399958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.399969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.916 [2024-11-27 19:22:10.399978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.399986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.400017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.400027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:00.916 [2024-11-27 19:22:10.400036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.400044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.400091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.400101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.916 [2024-11-27 19:22:10.400110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.400118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.400188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.916 [2024-11-27 19:22:10.400198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.916 [2024-11-27 19:22:10.400206] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.916 [2024-11-27 19:22:10.400217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.916 [2024-11-27 19:22:10.400349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.119 ms, result 0 00:24:01.488 00:24:01.488 00:24:01.749 19:22:11 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:03.664 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:03.664 19:22:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:03.664 [2024-11-27 19:22:13.258445] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:03.664 [2024-11-27 19:22:13.258555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78925 ] 00:24:03.925 [2024-11-27 19:22:13.417405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.925 [2024-11-27 19:22:13.532956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.498 [2024-11-27 19:22:13.827169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.498 [2024-11-27 19:22:13.827248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.498 [2024-11-27 19:22:13.987782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:13.987840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:04.498 [2024-11-27 19:22:13.987856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:04.498 [2024-11-27 19:22:13.987865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:13.987919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:13.987932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.498 [2024-11-27 19:22:13.987941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:04.498 [2024-11-27 19:22:13.987949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:13.987971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:04.498 [2024-11-27 19:22:13.988831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:04.498 [2024-11-27 19:22:13.988859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:13.988869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.498 [2024-11-27 19:22:13.988879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:24:04.498 [2024-11-27 19:22:13.988887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:13.990636] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:04.498 [2024-11-27 19:22:14.004525] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:14.004581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:04.498 [2024-11-27 19:22:14.004596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.892 ms 00:24:04.498 [2024-11-27 19:22:14.004604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:14.004684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:14.004694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:04.498 [2024-11-27 19:22:14.004703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:04.498 [2024-11-27 19:22:14.004710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:14.012699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:14.012735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.498 [2024-11-27 19:22:14.012746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.913 ms 00:24:04.498 [2024-11-27 19:22:14.012760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:14.012839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:14.012849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.498 [2024-11-27 19:22:14.012857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:04.498 [2024-11-27 19:22:14.012865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.498 [2024-11-27 19:22:14.012909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.498 [2024-11-27 19:22:14.012919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:04.499 [2024-11-27 19:22:14.012927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:04.499 [2024-11-27 19:22:14.012935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.499 [2024-11-27 19:22:14.012961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:04.499 [2024-11-27 19:22:14.016930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.499 [2024-11-27 19:22:14.016964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.499 [2024-11-27 19:22:14.016978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:24:04.499 [2024-11-27 19:22:14.016986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.499 [2024-11-27 19:22:14.017021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.499 [2024-11-27 19:22:14.017030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:04.499 [2024-11-27 19:22:14.017039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:04.499 [2024-11-27 19:22:14.017047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.499 [2024-11-27 19:22:14.017099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:04.499 [2024-11-27 19:22:14.017136] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:04.499 [2024-11-27 
19:22:14.017174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:04.499 [2024-11-27 19:22:14.017193] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:04.499 [2024-11-27 19:22:14.017304] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:04.499 [2024-11-27 19:22:14.017320] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:04.499 [2024-11-27 19:22:14.017336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:04.499 [2024-11-27 19:22:14.017351] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017374] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:04.499 [2024-11-27 19:22:14.017387] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:04.499 [2024-11-27 19:22:14.017402] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:04.499 [2024-11-27 19:22:14.017413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:04.499 [2024-11-27 19:22:14.017421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.499 [2024-11-27 19:22:14.017429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:04.499 [2024-11-27 19:22:14.017437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:24:04.499 [2024-11-27 19:22:14.017444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.499 [2024-11-27 19:22:14.017547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.499 [2024-11-27 19:22:14.017571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:04.499 [2024-11-27 19:22:14.017584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:04.499 [2024-11-27 19:22:14.017597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.499 [2024-11-27 19:22:14.017736] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:04.499 [2024-11-27 19:22:14.017755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:04.499 [2024-11-27 19:22:14.017770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:04.499 [2024-11-27 19:22:14.017806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:04.499 [2024-11-27 19:22:14.017838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.499 [2024-11-27 
19:22:14.017863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:04.499 [2024-11-27 19:22:14.017876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:04.499 [2024-11-27 19:22:14.017887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.499 [2024-11-27 19:22:14.017908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:04.499 [2024-11-27 19:22:14.017919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:04.499 [2024-11-27 19:22:14.017930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:04.499 [2024-11-27 19:22:14.017952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:04.499 [2024-11-27 19:22:14.017976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:04.499 [2024-11-27 19:22:14.017984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.499 [2024-11-27 19:22:14.017991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:04.499 [2024-11-27 19:22:14.017997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.499 [2024-11-27 19:22:14.018011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:04.499 [2024-11-27 19:22:14.018017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.499 [2024-11-27 19:22:14.018031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:04.499 [2024-11-27 19:22:14.018038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.499 [2024-11-27 19:22:14.018051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:04.499 [2024-11-27 19:22:14.018058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.499 [2024-11-27 19:22:14.018072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:04.499 [2024-11-27 19:22:14.018078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:04.499 [2024-11-27 19:22:14.018084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.499 [2024-11-27 19:22:14.018091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:04.499 [2024-11-27 19:22:14.018098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:04.499 [2024-11-27 19:22:14.018104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:04.499 [2024-11-27 19:22:14.018117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:24:04.499 [2024-11-27 19:22:14.018141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:04.499 [2024-11-27 19:22:14.018159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:04.499 [2024-11-27 19:22:14.018168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.499 [2024-11-27 19:22:14.018175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.499 [2024-11-27 19:22:14.018183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:04.499 [2024-11-27 19:22:14.018191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:04.499 [2024-11-27 19:22:14.018199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:04.499 [2024-11-27 19:22:14.018206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:04.499 [2024-11-27 19:22:14.018213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:04.499 [2024-11-27 19:22:14.018220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:04.499 [2024-11-27 19:22:14.018229] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:04.499 [2024-11-27 19:22:14.018240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.499 [2024-11-27 19:22:14.018251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:04.499 [2024-11-27 19:22:14.018259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:04.499 [2024-11-27 19:22:14.018267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:04.499 [2024-11-27 19:22:14.018274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:04.499 [2024-11-27 19:22:14.018282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:04.499 [2024-11-27 19:22:14.018289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:04.499 [2024-11-27 19:22:14.018297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:04.499 [2024-11-27 19:22:14.018304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:04.499 [2024-11-27 19:22:14.018311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:04.499 [2024-11-27 19:22:14.018319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:04.499 [2024-11-27 19:22:14.018327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:04.499 [2024-11-27 19:22:14.018334] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:04.500 [2024-11-27 19:22:14.018341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:04.500 [2024-11-27 19:22:14.018348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:04.500 [2024-11-27 19:22:14.018355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:04.500 [2024-11-27 19:22:14.018364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.500 [2024-11-27 19:22:14.018372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:04.500 [2024-11-27 19:22:14.018380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:04.500 [2024-11-27 19:22:14.018387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:04.500 [2024-11-27 19:22:14.018394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:04.500 [2024-11-27 19:22:14.018408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.018415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:04.500 [2024-11-27 19:22:14.018423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:24:04.500 [2024-11-27 19:22:14.018430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.049981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.050029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.500 [2024-11-27 19:22:14.050041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.499 ms 00:24:04.500 [2024-11-27 19:22:14.050054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.050169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.050180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.500 [2024-11-27 19:22:14.050189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:04.500 [2024-11-27 19:22:14.050197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.090627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.090676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.500 [2024-11-27 19:22:14.090689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.367 ms 00:24:04.500 [2024-11-27 19:22:14.090698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.090746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.090756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.500 
[2024-11-27 19:22:14.090770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.500 [2024-11-27 19:22:14.090777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.091465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.091502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.500 [2024-11-27 19:22:14.091514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:24:04.500 [2024-11-27 19:22:14.091522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.091684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.091695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.500 [2024-11-27 19:22:14.091710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:04.500 [2024-11-27 19:22:14.091718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.107377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.107419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.500 [2024-11-27 19:22:14.107431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.638 ms 00:24:04.500 [2024-11-27 19:22:14.107439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.500 [2024-11-27 19:22:14.121684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:04.500 [2024-11-27 19:22:14.121731] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:04.500 [2024-11-27 19:22:14.121744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.500 [2024-11-27 19:22:14.121753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:04.500 [2024-11-27 19:22:14.121762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.200 ms 00:24:04.500 [2024-11-27 19:22:14.121769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.761 [2024-11-27 19:22:14.147865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.761 [2024-11-27 19:22:14.147905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:04.761 [2024-11-27 19:22:14.147918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.043 ms 00:24:04.761 [2024-11-27 19:22:14.147928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.761 [2024-11-27 19:22:14.160746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.761 [2024-11-27 19:22:14.160790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:04.761 [2024-11-27 19:22:14.160801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.751 ms 00:24:04.761 [2024-11-27 19:22:14.160808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.761 [2024-11-27 19:22:14.173575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.761 [2024-11-27 19:22:14.173619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:04.761 [2024-11-27 19:22:14.173631] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:24:04.761 [2024-11-27 19:22:14.173638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.761 [2024-11-27 19:22:14.174301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.174327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:04.762 [2024-11-27 19:22:14.174340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:24:04.762 [2024-11-27 19:22:14.174348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.242496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.242550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:04.762 [2024-11-27 19:22:14.242571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.129 ms 00:24:04.762 [2024-11-27 19:22:14.242580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.253977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:04.762 [2024-11-27 19:22:14.257329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.257369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.762 [2024-11-27 19:22:14.257380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.696 ms 00:24:04.762 [2024-11-27 19:22:14.257388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.257472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.257483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:04.762 [2024-11-27 19:22:14.257495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:04.762 [2024-11-27 19:22:14.257503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.257572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.257583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.762 [2024-11-27 19:22:14.257592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:04.762 [2024-11-27 19:22:14.257601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.257622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.257631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.762 [2024-11-27 19:22:14.257639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:04.762 [2024-11-27 19:22:14.257647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.257685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:04.762 [2024-11-27 19:22:14.257697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.257705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:04.762 [2024-11-27 19:22:14.257714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:04.762 [2024-11-27 19:22:14.257722] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.283163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.283210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.762 [2024-11-27 19:22:14.283229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.422 ms 00:24:04.762 [2024-11-27 19:22:14.283238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.283325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.762 [2024-11-27 19:22:14.283335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:04.762 [2024-11-27 19:22:14.283345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:04.762 [2024-11-27 19:22:14.283354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.762 [2024-11-27 19:22:14.284760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.512 ms, result 0
00:24:05.707  [2024-11-27T19:22:16.728Z] Copying: 15/1024 [MB] (15 MBps)
[2024-11-27T19:22:17.300Z] Copying: 39/1024 [MB] (24 MBps)
[2024-11-27T19:22:18.687Z] Copying: 50/1024 [MB] (10 MBps)
[2024-11-27T19:22:19.631Z] Copying: 64/1024 [MB] (14 MBps)
[2024-11-27T19:22:20.572Z] Copying: 93/1024 [MB] (29 MBps)
[2024-11-27T19:22:21.517Z] Copying: 115/1024 [MB] (21 MBps)
[2024-11-27T19:22:22.461Z] Copying: 136/1024 [MB] (20 MBps)
[2024-11-27T19:22:23.407Z] Copying: 158/1024 [MB] (22 MBps)
[2024-11-27T19:22:24.349Z] Copying: 181/1024 [MB] (22 MBps)
[2024-11-27T19:22:25.737Z] Copying: 195/1024 [MB] (13 MBps)
[2024-11-27T19:22:26.310Z] Copying: 205/1024 [MB] (10 MBps)
[2024-11-27T19:22:27.699Z] Copying: 215/1024 [MB] (10 MBps)
[2024-11-27T19:22:28.643Z] Copying: 233/1024 [MB] (17 MBps)
[2024-11-27T19:22:29.587Z] Copying: 249/1024 [MB] (16 MBps)
[2024-11-27T19:22:30.531Z] Copying: 265/1024 [MB] (15 MBps)
[2024-11-27T19:22:31.476Z] Copying: 278/1024 [MB] (13 MBps)
[2024-11-27T19:22:32.421Z] Copying: 295492/1048576 [kB] (10124 kBps)
[2024-11-27T19:22:33.367Z] Copying: 305/1024 [MB] (17 MBps)
[2024-11-27T19:22:34.313Z] Copying: 321/1024 [MB] (15 MBps)
[2024-11-27T19:22:35.704Z] Copying: 335/1024 [MB] (13 MBps)
[2024-11-27T19:22:36.649Z] Copying: 352/1024 [MB] (17 MBps)
[2024-11-27T19:22:37.593Z] Copying: 371/1024 [MB] (18 MBps)
[2024-11-27T19:22:38.538Z] Copying: 389/1024 [MB] (17 MBps)
[2024-11-27T19:22:39.512Z] Copying: 411/1024 [MB] (22 MBps)
[2024-11-27T19:22:40.481Z] Copying: 426/1024 [MB] (14 MBps)
[2024-11-27T19:22:41.423Z] Copying: 439/1024 [MB] (13 MBps)
[2024-11-27T19:22:42.365Z] Copying: 456/1024 [MB] (17 MBps)
[2024-11-27T19:22:43.307Z] Copying: 470/1024 [MB] (13 MBps)
[2024-11-27T19:22:44.686Z] Copying: 491680/1048576 [kB] (10240 kBps)
[2024-11-27T19:22:45.622Z] Copying: 494/1024 [MB] (14 MBps)
[2024-11-27T19:22:46.565Z] Copying: 519/1024 [MB] (25 MBps)
[2024-11-27T19:22:47.507Z] Copying: 536/1024 [MB] (17 MBps)
[2024-11-27T19:22:48.450Z] Copying: 551/1024 [MB] (14 MBps)
[2024-11-27T19:22:49.392Z] Copying: 561/1024 [MB] (10 MBps)
[2024-11-27T19:22:50.336Z] Copying: 577/1024 [MB] (15 MBps)
[2024-11-27T19:22:51.725Z] Copying: 588/1024 [MB] (11 MBps)
[2024-11-27T19:22:52.669Z] Copying: 601/1024 [MB] (12 MBps)
[2024-11-27T19:22:53.612Z] Copying: 613/1024 [MB] (12 MBps)
[2024-11-27T19:22:54.556Z] Copying: 624/1024 [MB] (10 MBps)
[2024-11-27T19:22:55.498Z] Copying: 645/1024 [MB] (21 MBps)
[2024-11-27T19:22:56.443Z] Copying: 655/1024 [MB] (10 MBps)
[2024-11-27T19:22:57.387Z] Copying: 669/1024 [MB] (14 MBps)
[2024-11-27T19:22:58.337Z] Copying: 684/1024 [MB] (14 MBps)
[2024-11-27T19:22:59.725Z] Copying: 704/1024 [MB] (20 MBps)
[2024-11-27T19:23:00.669Z] Copying: 724/1024 [MB] (19 MBps)
[2024-11-27T19:23:01.613Z] Copying: 737/1024 [MB] (13 MBps)
[2024-11-27T19:23:02.557Z] Copying: 750/1024 [MB] (12 MBps)
[2024-11-27T19:23:03.513Z] Copying: 766/1024 [MB] (15 MBps)
[2024-11-27T19:23:04.455Z] Copying: 781/1024 [MB] (15 MBps)
[2024-11-27T19:23:05.400Z] Copying: 798/1024 [MB] (16 MBps)
[2024-11-27T19:23:06.343Z] Copying: 820/1024 [MB] (22 MBps)
[2024-11-27T19:23:07.730Z] Copying: 840/1024 [MB] (19 MBps)
[2024-11-27T19:23:08.302Z] Copying: 858/1024 [MB] (18 MBps)
[2024-11-27T19:23:09.688Z] Copying: 888/1024 [MB] (29 MBps)
[2024-11-27T19:23:10.677Z] Copying: 903/1024 [MB] (14 MBps)
[2024-11-27T19:23:11.645Z] Copying: 921/1024 [MB] (17 MBps)
[2024-11-27T19:23:12.590Z] Copying: 938/1024 [MB] (17 MBps)
[2024-11-27T19:23:13.536Z] Copying: 953/1024 [MB] (14 MBps)
[2024-11-27T19:23:14.480Z] Copying: 968/1024 [MB] (14 MBps)
[2024-11-27T19:23:15.425Z] Copying: 985/1024 [MB] (17 MBps)
[2024-11-27T19:23:16.369Z] Copying: 998/1024 [MB] (13 MBps)
[2024-11-27T19:23:17.314Z] Copying: 1013/1024 [MB] (14 MBps)
[2024-11-27T19:23:18.259Z] Copying: 1023/1024 [MB] (10 MBps)
[2024-11-27T19:23:18.259Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-11-27 19:23:17.912561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.912644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:08.624 [2024-11-27 19:23:17.912671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:08.624 [2024-11-27 19:23:17.912681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:17.913101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.624 [2024-11-27 19:23:17.916190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.916235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:08.624 [2024-11-27 19:23:17.916247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:25:08.624 [2024-11-27 19:23:17.916256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:17.928362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.928417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:08.624 [2024-11-27 19:23:17.928430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.755 ms 00:25:08.624 [2024-11-27 19:23:17.928445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:17.952849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.952902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:08.624 [2024-11-27 19:23:17.952914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.385 ms 00:25:08.624 [2024-11-27 19:23:17.952922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:17.959117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.959165] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:08.624 [2024-11-27 19:23:17.959176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:25:08.624 [2024-11-27 19:23:17.959192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:17.986197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:17.986245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:08.624 [2024-11-27 19:23:17.986257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.959 ms 00:25:08.624 [2024-11-27 19:23:17.986265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:18.002424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:18.002471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:08.624 [2024-11-27 19:23:18.002484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.111 ms 00:25:08.624 [2024-11-27 19:23:18.002493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:18.171404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:18.171457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:08.624 [2024-11-27 19:23:18.171470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.859 ms 00:25:08.624 [2024-11-27 19:23:18.171479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:18.197662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:18.197707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:08.624 [2024-11-27 19:23:18.197719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.158 ms 00:25:08.624 [2024-11-27 19:23:18.197727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:18.223525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:18.223571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:08.624 [2024-11-27 19:23:18.223582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.753 ms 00:25:08.624 [2024-11-27 19:23:18.223590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.624 [2024-11-27 19:23:18.248285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.624 [2024-11-27 19:23:18.248338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:08.624 [2024-11-27 19:23:18.248350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.652 ms 00:25:08.624 [2024-11-27 19:23:18.248358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-11-27 19:23:18.273314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.888 [2024-11-27 19:23:18.273359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:08.888 [2024-11-27 19:23:18.273370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.887 ms 00:25:08.888 [2024-11-27 19:23:18.273377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.888 [2024-11-27 19:23:18.273420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:25:08.888 [2024-11-27 19:23:18.273435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 90880 / 261120 wr_cnt: 1 state: open 00:25:08.888 [2024-11-27 19:23:18.273447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:08.888 [2024-11-27 19:23:18.273650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273823] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.273992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 
19:23:18.274016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:08.889 [2024-11-27 19:23:18.274220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:25:08.889 [2024-11-27 19:23:18.274230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:08.889 [2024-11-27 19:23:18.274246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:08.889 [2024-11-27 19:23:18.274256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a739b9e0-5d65-489d-af21-aa598e8a13af
00:25:08.889 [2024-11-27 19:23:18.274265] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 90880
00:25:08.889 [2024-11-27 19:23:18.274274] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 91840
00:25:08.889 [2024-11-27 19:23:18.274281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 90880
00:25:08.889 [2024-11-27 19:23:18.274290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0106
00:25:08.889 [2024-11-27 19:23:18.274312] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:08.889 [2024-11-27 19:23:18.274321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:08.889 [2024-11-27 19:23:18.274329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:08.889 [2024-11-27 19:23:18.274336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:08.889 [2024-11-27 19:23:18.274343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:08.889 [2024-11-27 19:23:18.274352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-11-27 19:23:18.274360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:08.889 [2024-11-27 19:23:18.274369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:25:08.889 [2024-11-27 19:23:18.274377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-11-27 19:23:18.288192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-11-27 19:23:18.288236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:08.889 [2024-11-27 19:23:18.288254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.796 ms 00:25:08.889 [2024-11-27 19:23:18.288263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.889 [2024-11-27 19:23:18.288663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.889 [2024-11-27 19:23:18.288674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:08.889 [2024-11-27 19:23:18.288683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:25:08.890 [2024-11-27 19:23:18.288690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.325202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.325250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.890 [2024-11-27 19:23:18.325263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.325272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.325342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.325353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.890 [2024-11-27 19:23:18.325362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.325372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.325454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.325470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.890 [2024-11-27 19:23:18.325479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.325488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.325505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.325515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.890 [2024-11-27 19:23:18.325524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.325533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.410912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.411154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.890 [2024-11-27 19:23:18.411177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.411186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.480679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.480733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.890 [2024-11-27 19:23:18.480746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.480754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.480819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.480829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.890 [2024-11-27 19:23:18.480838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.480852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.480908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.480919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.890 [2024-11-27 19:23:18.480928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.480936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.481038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.481049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.890 [2024-11-27 19:23:18.481058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.481070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.481100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.481110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.890 
[2024-11-27 19:23:18.481118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.481153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.481194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.481205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.890 [2024-11-27 19:23:18.481213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.481222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.481270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.890 [2024-11-27 19:23:18.481281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.890 [2024-11-27 19:23:18.481290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.890 [2024-11-27 19:23:18.481299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.890 [2024-11-27 19:23:18.481438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.380 ms, result 0 00:25:10.275 00:25:10.275 00:25:10.275 19:23:19 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:10.537 [2024-11-27 19:23:19.974490] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:25:10.537 [2024-11-27 19:23:19.974643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79605 ] 00:25:10.537 [2024-11-27 19:23:20.138387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.797 [2024-11-27 19:23:20.267678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.057 [2024-11-27 19:23:20.563659] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:11.057 [2024-11-27 19:23:20.563746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:11.318 [2024-11-27 19:23:20.724970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.725036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:11.318 [2024-11-27 19:23:20.725051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:11.318 [2024-11-27 19:23:20.725060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.725116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.725154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:11.318 [2024-11-27 19:23:20.725163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:11.318 [2024-11-27 19:23:20.725172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.725194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:11.318 [2024-11-27 
19:23:20.725898] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:11.318 [2024-11-27 19:23:20.725940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.725949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:11.318 [2024-11-27 19:23:20.725957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:25:11.318 [2024-11-27 19:23:20.725965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.727745] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:11.318 [2024-11-27 19:23:20.742042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.742092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:11.318 [2024-11-27 19:23:20.742106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.300 ms 00:25:11.318 [2024-11-27 19:23:20.742115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.742216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.742227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:11.318 [2024-11-27 19:23:20.742237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:11.318 [2024-11-27 19:23:20.742244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.750392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.750435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.318 [2024-11-27 19:23:20.750445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.069 ms 00:25:11.318 [2024-11-27 19:23:20.750459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.750538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.750547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.318 [2024-11-27 19:23:20.750556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:11.318 [2024-11-27 19:23:20.750563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.750607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.750617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:11.318 [2024-11-27 19:23:20.750626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:11.318 [2024-11-27 19:23:20.750634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.750659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:11.318 [2024-11-27 19:23:20.754804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.754843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.318 [2024-11-27 19:23:20.754857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.150 ms 00:25:11.318 [2024-11-27 19:23:20.754866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
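
Every FTL management step in this trace follows the same four-entry pattern emitted by mngt/ftl_mngt.c -- Action, name, duration, status -- for both the 'FTL startup' and 'FTL shutdown' processes, with Rollback entries when a shutdown unwinds earlier setup steps. A quick way to sanity-check a run is to sum the per-step durations and compare the total against the 'Management process finished ... duration' summary line. The sketch below is a reader-side helper written against this log's line format (it assumes one log entry per line, as the log is originally emitted); it is not part of SPDK.

    /* Sum the per-step "duration:" values printed by trace_step so the
     * total can be compared with the "Management process finished ...
     * duration = X ms" summary. Field positions are assumed from the
     * log excerpts above, not from any SPDK-provided parsing API. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        double total_ms = 0.0;
        int steps = 0;

        while (fgets(line, sizeof(line), stdin) != NULL) {
            const char *p;
            /* Only per-step trace lines, e.g.
             * "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.892 ms" */
            if (strstr(line, "trace_step") == NULL)
                continue;
            p = strstr(line, "duration: ");
            if (p == NULL)
                continue;
            total_ms += strtod(p + strlen("duration: "), NULL);
            steps++;
        }
        printf("%d steps, %.3f ms total\n", steps, total_ms);
        return 0;
    }

Piped over the first startup above (e.g. cc -o sum_steps sum_steps.c && ./sum_steps < ftl_startup.log, with a hypothetical capture file), the steps visible in this excerpt add up to roughly 290 ms, close to the reported 296.512 ms; the small remainder is time spent between steps.
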
00:25:11.318 [2024-11-27 19:23:20.754924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.754933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:11.318 [2024-11-27 19:23:20.754943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:11.318 [2024-11-27 19:23:20.754950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.755001] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:11.318 [2024-11-27 19:23:20.755024] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:11.318 [2024-11-27 19:23:20.755061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:11.318 [2024-11-27 19:23:20.755080] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:11.318 [2024-11-27 19:23:20.755213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:11.318 [2024-11-27 19:23:20.755226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:11.318 [2024-11-27 19:23:20.755237] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:11.318 [2024-11-27 19:23:20.755247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:11.318 [2024-11-27 19:23:20.755257] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:11.318 [2024-11-27 19:23:20.755266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:11.318 [2024-11-27 19:23:20.755274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:11.318 [2024-11-27 19:23:20.755285] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:11.318 [2024-11-27 19:23:20.755293] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:11.318 [2024-11-27 19:23:20.755302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.755310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:11.318 [2024-11-27 19:23:20.755319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:11.318 [2024-11-27 19:23:20.755326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.318 [2024-11-27 19:23:20.755411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.318 [2024-11-27 19:23:20.755420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:11.318 [2024-11-27 19:23:20.755428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:11.319 [2024-11-27 19:23:20.755435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.755541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:11.319 [2024-11-27 19:23:20.755551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:11.319 [2024-11-27 19:23:20.755560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755568] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:11.319 [2024-11-27 19:23:20.755583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:11.319 [2024-11-27 19:23:20.755604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.319 [2024-11-27 19:23:20.755619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:11.319 [2024-11-27 19:23:20.755626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:11.319 [2024-11-27 19:23:20.755634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.319 [2024-11-27 19:23:20.755647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:11.319 [2024-11-27 19:23:20.755654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:11.319 [2024-11-27 19:23:20.755663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:11.319 [2024-11-27 19:23:20.755678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:11.319 [2024-11-27 19:23:20.755700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:11.319 [2024-11-27 19:23:20.755720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:11.319 [2024-11-27 19:23:20.755739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:11.319 [2024-11-27 19:23:20.755759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:11.319 [2024-11-27 19:23:20.755778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.319 [2024-11-27 19:23:20.755790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:11.319 [2024-11-27 
19:23:20.755797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:11.319 [2024-11-27 19:23:20.755803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.319 [2024-11-27 19:23:20.755810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:11.319 [2024-11-27 19:23:20.755817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:11.319 [2024-11-27 19:23:20.755823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:11.319 [2024-11-27 19:23:20.755836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:11.319 [2024-11-27 19:23:20.755842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:11.319 [2024-11-27 19:23:20.755856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:11.319 [2024-11-27 19:23:20.755863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.319 [2024-11-27 19:23:20.755883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:11.319 [2024-11-27 19:23:20.755890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:11.319 [2024-11-27 19:23:20.755897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:11.319 [2024-11-27 19:23:20.755904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:11.319 [2024-11-27 19:23:20.755911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:11.319 [2024-11-27 19:23:20.755917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:11.319 [2024-11-27 19:23:20.755926] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:11.319 [2024-11-27 19:23:20.755936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.755946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:11.319 [2024-11-27 19:23:20.755954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:11.319 [2024-11-27 19:23:20.755961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:11.319 [2024-11-27 19:23:20.755970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:11.319 [2024-11-27 19:23:20.755977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:11.319 [2024-11-27 19:23:20.755985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:11.319 [2024-11-27 19:23:20.755992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:25:11.319 [2024-11-27 19:23:20.755999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:11.319 [2024-11-27 19:23:20.756006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:11.319 [2024-11-27 19:23:20.756013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:11.319 [2024-11-27 19:23:20.756051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:11.319 [2024-11-27 19:23:20.756059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:11.319 [2024-11-27 19:23:20.756075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:11.319 [2024-11-27 19:23:20.756083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:11.319 [2024-11-27 19:23:20.756090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:11.319 [2024-11-27 19:23:20.756097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.756104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:11.319 [2024-11-27 19:23:20.756112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:25:11.319 [2024-11-27 19:23:20.756120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.787877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.787928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.319 [2024-11-27 19:23:20.787941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.698 ms 00:25:11.319 [2024-11-27 19:23:20.787953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.788040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.788049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:11.319 [2024-11-27 19:23:20.788058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:11.319 [2024-11-27 19:23:20.788066] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.837225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.837279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.319 [2024-11-27 19:23:20.837293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.101 ms 00:25:11.319 [2024-11-27 19:23:20.837302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.837351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.837362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.319 [2024-11-27 19:23:20.837375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:11.319 [2024-11-27 19:23:20.837383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.319 [2024-11-27 19:23:20.837948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.319 [2024-11-27 19:23:20.837983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.319 [2024-11-27 19:23:20.837994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:25:11.319 [2024-11-27 19:23:20.838003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.838179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.838196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.320 [2024-11-27 19:23:20.838212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:25:11.320 [2024-11-27 19:23:20.838220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.853744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.853790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.320 [2024-11-27 19:23:20.853801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:25:11.320 [2024-11-27 19:23:20.853809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.868375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:11.320 [2024-11-27 19:23:20.868565] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:11.320 [2024-11-27 19:23:20.868585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.868595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:11.320 [2024-11-27 19:23:20.868604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.668 ms 00:25:11.320 [2024-11-27 19:23:20.868612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.894642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.894690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:11.320 [2024-11-27 19:23:20.894703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.985 ms 00:25:11.320 [2024-11-27 19:23:20.894711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 
[2024-11-27 19:23:20.907871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.908041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:11.320 [2024-11-27 19:23:20.908062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:25:11.320 [2024-11-27 19:23:20.908070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.920760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.920806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:11.320 [2024-11-27 19:23:20.920818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:25:11.320 [2024-11-27 19:23:20.920825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.320 [2024-11-27 19:23:20.921503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.320 [2024-11-27 19:23:20.921529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:11.320 [2024-11-27 19:23:20.921544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:25:11.320 [2024-11-27 19:23:20.921552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.580 [2024-11-27 19:23:20.988363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.580 [2024-11-27 19:23:20.988422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:11.581 [2024-11-27 19:23:20.988444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.792 ms 00:25:11.581 [2024-11-27 19:23:20.988453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:20.999820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:11.581 [2024-11-27 19:23:21.002978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.003171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:11.581 [2024-11-27 19:23:21.003193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.470 ms 00:25:11.581 [2024-11-27 19:23:21.003202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.003289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.003301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:11.581 [2024-11-27 19:23:21.003314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:11.581 [2024-11-27 19:23:21.003323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.004916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.004963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:11.581 [2024-11-27 19:23:21.004974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:25:11.581 [2024-11-27 19:23:21.004983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.005010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.005019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:11.581 [2024-11-27 
19:23:21.005028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:11.581 [2024-11-27 19:23:21.005037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.005081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:11.581 [2024-11-27 19:23:21.005092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.005101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:11.581 [2024-11-27 19:23:21.005109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:11.581 [2024-11-27 19:23:21.005117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.030656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.030839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:11.581 [2024-11-27 19:23:21.030867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.497 ms 00:25:11.581 [2024-11-27 19:23:21.030897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.030979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.581 [2024-11-27 19:23:21.030990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:11.581 [2024-11-27 19:23:21.031000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:11.581 [2024-11-27 19:23:21.031008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.581 [2024-11-27 19:23:21.032283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.807 ms, result 0 00:25:12.999  [2024-11-27T19:23:23.579Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T19:23:24.523Z] Copying: 32/1024 [MB] (19 MBps) [2024-11-27T19:23:25.468Z] Copying: 48/1024 [MB] (15 MBps) [2024-11-27T19:23:26.414Z] Copying: 65/1024 [MB] (17 MBps) [2024-11-27T19:23:27.360Z] Copying: 81/1024 [MB] (15 MBps) [2024-11-27T19:23:28.306Z] Copying: 100/1024 [MB] (19 MBps) [2024-11-27T19:23:29.249Z] Copying: 125/1024 [MB] (25 MBps) [2024-11-27T19:23:30.634Z] Copying: 149/1024 [MB] (23 MBps) [2024-11-27T19:23:31.577Z] Copying: 172/1024 [MB] (23 MBps) [2024-11-27T19:23:32.521Z] Copying: 198/1024 [MB] (26 MBps) [2024-11-27T19:23:33.466Z] Copying: 218/1024 [MB] (19 MBps) [2024-11-27T19:23:34.413Z] Copying: 242/1024 [MB] (24 MBps) [2024-11-27T19:23:35.360Z] Copying: 255/1024 [MB] (13 MBps) [2024-11-27T19:23:36.304Z] Copying: 275/1024 [MB] (20 MBps) [2024-11-27T19:23:37.249Z] Copying: 292/1024 [MB] (16 MBps) [2024-11-27T19:23:38.636Z] Copying: 310/1024 [MB] (18 MBps) [2024-11-27T19:23:39.578Z] Copying: 327/1024 [MB] (16 MBps) [2024-11-27T19:23:40.522Z] Copying: 347/1024 [MB] (20 MBps) [2024-11-27T19:23:41.467Z] Copying: 363/1024 [MB] (15 MBps) [2024-11-27T19:23:42.464Z] Copying: 383/1024 [MB] (20 MBps) [2024-11-27T19:23:43.412Z] Copying: 405/1024 [MB] (21 MBps) [2024-11-27T19:23:44.357Z] Copying: 429/1024 [MB] (23 MBps) [2024-11-27T19:23:45.302Z] Copying: 445/1024 [MB] (16 MBps) [2024-11-27T19:23:46.245Z] Copying: 458/1024 [MB] (13 MBps) [2024-11-27T19:23:47.634Z] Copying: 481/1024 [MB] (22 MBps) [2024-11-27T19:23:48.579Z] Copying: 501/1024 [MB] (19 MBps) [2024-11-27T19:23:49.523Z] Copying: 521/1024 [MB] (19 MBps) [2024-11-27T19:23:50.471Z] Copying: 539/1024 [MB] (18 MBps) 
[2024-11-27T19:23:51.415Z] Copying: 560/1024 [MB] (20 MBps) [2024-11-27T19:23:52.360Z] Copying: 575/1024 [MB] (15 MBps) [2024-11-27T19:23:53.303Z] Copying: 595/1024 [MB] (20 MBps) [2024-11-27T19:23:54.246Z] Copying: 609/1024 [MB] (14 MBps) [2024-11-27T19:23:55.633Z] Copying: 631/1024 [MB] (21 MBps) [2024-11-27T19:23:56.578Z] Copying: 648/1024 [MB] (16 MBps) [2024-11-27T19:23:57.523Z] Copying: 670/1024 [MB] (21 MBps) [2024-11-27T19:23:58.467Z] Copying: 683/1024 [MB] (13 MBps) [2024-11-27T19:23:59.409Z] Copying: 694/1024 [MB] (10 MBps) [2024-11-27T19:24:00.350Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-27T19:24:01.342Z] Copying: 717/1024 [MB] (12 MBps) [2024-11-27T19:24:02.284Z] Copying: 737/1024 [MB] (19 MBps) [2024-11-27T19:24:03.229Z] Copying: 757/1024 [MB] (20 MBps) [2024-11-27T19:24:04.614Z] Copying: 771/1024 [MB] (13 MBps) [2024-11-27T19:24:05.558Z] Copying: 787/1024 [MB] (16 MBps) [2024-11-27T19:24:06.504Z] Copying: 803/1024 [MB] (15 MBps) [2024-11-27T19:24:07.450Z] Copying: 819/1024 [MB] (15 MBps) [2024-11-27T19:24:08.395Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-27T19:24:09.343Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-27T19:24:10.286Z] Copying: 851/1024 [MB] (10 MBps) [2024-11-27T19:24:11.230Z] Copying: 869/1024 [MB] (18 MBps) [2024-11-27T19:24:12.617Z] Copying: 883/1024 [MB] (13 MBps) [2024-11-27T19:24:13.560Z] Copying: 893/1024 [MB] (10 MBps) [2024-11-27T19:24:14.589Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-27T19:24:15.532Z] Copying: 919/1024 [MB] (15 MBps) [2024-11-27T19:24:16.478Z] Copying: 930/1024 [MB] (10 MBps) [2024-11-27T19:24:17.419Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-27T19:24:18.363Z] Copying: 958/1024 [MB] (17 MBps) [2024-11-27T19:24:19.313Z] Copying: 974/1024 [MB] (15 MBps) [2024-11-27T19:24:20.257Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-27T19:24:21.646Z] Copying: 1003/1024 [MB] (12 MBps) [2024-11-27T19:24:21.646Z] Copying: 1022/1024 [MB] (18 MBps) [2024-11-27T19:24:21.646Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 19:24:21.621070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.011 [2024-11-27 19:24:21.621164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:12.011 [2024-11-27 19:24:21.621188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:12.011 [2024-11-27 19:24:21.621198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.011 [2024-11-27 19:24:21.621223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:12.011 [2024-11-27 19:24:21.624327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.012 [2024-11-27 19:24:21.624367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:12.012 [2024-11-27 19:24:21.624379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:26:12.012 [2024-11-27 19:24:21.624387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.012 [2024-11-27 19:24:21.624629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.012 [2024-11-27 19:24:21.624640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:12.012 [2024-11-27 19:24:21.624650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:26:12.012 [2024-11-27 19:24:21.624662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.012 [2024-11-27 19:24:21.632683] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:12.012 [2024-11-27 19:24:21.632733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:12.012 [2024-11-27 19:24:21.632747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:26:12.012 [2024-11-27 19:24:21.632755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.012 [2024-11-27 19:24:21.639287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.012 [2024-11-27 19:24:21.639477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:12.012 [2024-11-27 19:24:21.639497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.486 ms 00:26:12.012 [2024-11-27 19:24:21.639514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.273 [2024-11-27 19:24:21.667563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.273 [2024-11-27 19:24:21.667611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:12.273 [2024-11-27 19:24:21.667623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.997 ms 00:26:12.273 [2024-11-27 19:24:21.667631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.274 [2024-11-27 19:24:21.683996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.274 [2024-11-27 19:24:21.684040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:12.274 [2024-11-27 19:24:21.684053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.321 ms 00:26:12.274 [2024-11-27 19:24:21.684062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.536 [2024-11-27 19:24:21.993335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.536 [2024-11-27 19:24:21.993407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:12.536 [2024-11-27 19:24:21.993422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 309.240 ms 00:26:12.536 [2024-11-27 19:24:21.993431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.536 [2024-11-27 19:24:22.019402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.536 [2024-11-27 19:24:22.019449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:12.536 [2024-11-27 19:24:22.019463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.955 ms 00:26:12.536 [2024-11-27 19:24:22.019471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.536 [2024-11-27 19:24:22.043770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.536 [2024-11-27 19:24:22.043814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:12.536 [2024-11-27 19:24:22.043826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:26:12.536 [2024-11-27 19:24:22.043834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.536 [2024-11-27 19:24:22.068263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.536 [2024-11-27 19:24:22.068306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:12.536 [2024-11-27 19:24:22.068319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.388 ms 00:26:12.536 [2024-11-27 19:24:22.068326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
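The layout dump earlier in this trace is internally consistent, which makes a quick sanity check possible: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, and the superblock metadata dump lists a region whose offset and size match it (blk_offs:0x20, blk_sz:0x5000). A short illustrative check, with all values copied from the log; the 4096-byte FTL block size is an assumption, though it matches the block_size=4096 these ftl tests configure.

# Illustrative cross-check of the FTL layout values logged above.
l2p_entries = 20971520     # "L2P entries" from ftl_layout_setup
l2p_addr_size = 4          # "L2P address size" in bytes
block_size = 4096          # assumed FTL block size in bytes

print(l2p_entries * l2p_addr_size / 2**20)   # 80.0 -> "Region l2p ... blocks: 80.00 MiB"

# The SB metadata dump shows the same region as blk_sz:0x5000 starting at
# blk_offs:0x20 (0x20 * 4096 B = 0.125 MiB, the logged 0.12 MiB offset).
print(0x5000 * block_size / 2**20)           # 80.0 MiB again

The P2L numbers line up the same way: 2048 P2L checkpoint pages at 4 KiB each give the 8.00 MiB reported for each of the p2l0 through p2l3 regions.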
00:26:12.536 [2024-11-27 19:24:22.092147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.536 [2024-11-27 19:24:22.092190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:12.536 [2024-11-27 19:24:22.092202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.752 ms 00:26:12.536 [2024-11-27 19:24:22.092210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.536 [2024-11-27 19:24:22.092252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:12.536 [2024-11-27 19:24:22.092268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:12.536 [2024-11-27 19:24:22.092280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:26:12.536 [2024-11-27 19:24:22.092435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:12.536 [2024-11-27 19:24:22.092556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.092987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093005] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:12.537 [2024-11-27 19:24:22.093053] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:12.537 [2024-11-27 19:24:22.093062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a739b9e0-5d65-489d-af21-aa598e8a13af 00:26:12.537 [2024-11-27 19:24:22.093071] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:12.537 [2024-11-27 19:24:22.093079] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 41152 00:26:12.537 [2024-11-27 19:24:22.093086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 40192 00:26:12.537 [2024-11-27 19:24:22.093095] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0239 00:26:12.537 [2024-11-27 19:24:22.093108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:12.537 [2024-11-27 19:24:22.093143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:12.537 [2024-11-27 19:24:22.093153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:12.538 [2024-11-27 19:24:22.093160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:12.538 [2024-11-27 19:24:22.093167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:12.538 [2024-11-27 19:24:22.093176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.538 [2024-11-27 19:24:22.093184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:12.538 [2024-11-27 19:24:22.093194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:26:12.538 [2024-11-27 19:24:22.093201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.106723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.538 [2024-11-27 19:24:22.106762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:12.538 [2024-11-27 19:24:22.106781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.504 ms 00:26:12.538 [2024-11-27 19:24:22.106788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.107222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.538 [2024-11-27 19:24:22.107235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:12.538 [2024-11-27 19:24:22.107245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:26:12.538 [2024-11-27 19:24:22.107253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.143429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.538 [2024-11-27 19:24:22.143474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.538 [2024-11-27 
19:24:22.143486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.538 [2024-11-27 19:24:22.143496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.143562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.538 [2024-11-27 19:24:22.143572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.538 [2024-11-27 19:24:22.143582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.538 [2024-11-27 19:24:22.143591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.143688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.538 [2024-11-27 19:24:22.143700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.538 [2024-11-27 19:24:22.143713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.538 [2024-11-27 19:24:22.143722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.538 [2024-11-27 19:24:22.143739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.538 [2024-11-27 19:24:22.143748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.538 [2024-11-27 19:24:22.143757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.538 [2024-11-27 19:24:22.143765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.228208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.228272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.799 [2024-11-27 19:24:22.228287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.228295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.296863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.296918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.799 [2024-11-27 19:24:22.296932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.296941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.799 [2024-11-27 19:24:22.297021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.799 [2024-11-27 19:24:22.297113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297259] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.799 [2024-11-27 19:24:22.297293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.799 [2024-11-27 19:24:22.297358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.799 [2024-11-27 19:24:22.297425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.799 [2024-11-27 19:24:22.297492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.799 [2024-11-27 19:24:22.297500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.799 [2024-11-27 19:24:22.297509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.799 [2024-11-27 19:24:22.297643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 676.536 ms, result 0 00:26:13.743 00:26:13.743 00:26:13.743 19:24:23 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:15.661 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:15.661 19:24:25 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:15.661 19:24:25 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:15.661 19:24:25 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:15.922 Process with pid 77339 is not found 00:26:15.922 Remove shared memory files 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77339 00:26:15.922 19:24:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77339 ']' 00:26:15.922 19:24:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77339 00:26:15.922 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77339) - No such process 00:26:15.922 19:24:25 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77339 is not found' 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:15.922 
19:24:25 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:15.922 19:24:25 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:15.922 ************************************ 00:26:15.922 END TEST ftl_restore 00:26:15.922 ************************************ 00:26:15.922 00:26:15.922 real 4m45.587s 00:26:15.922 user 4m32.990s 00:26:15.922 sys 0m12.418s 00:26:15.922 19:24:25 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.922 19:24:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:15.922 19:24:25 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:15.922 19:24:25 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:15.922 19:24:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.922 19:24:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:15.922 ************************************ 00:26:15.922 START TEST ftl_dirty_shutdown 00:26:15.922 ************************************ 00:26:15.922 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:15.922 * Looking for test storage... 00:26:16.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:16.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.184 --rc genhtml_branch_coverage=1 00:26:16.184 --rc genhtml_function_coverage=1 00:26:16.184 --rc genhtml_legend=1 00:26:16.184 --rc geninfo_all_blocks=1 00:26:16.184 --rc geninfo_unexecuted_blocks=1 00:26:16.184 00:26:16.184 ' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:16.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.184 --rc genhtml_branch_coverage=1 00:26:16.184 --rc genhtml_function_coverage=1 00:26:16.184 --rc genhtml_legend=1 00:26:16.184 --rc geninfo_all_blocks=1 00:26:16.184 --rc geninfo_unexecuted_blocks=1 00:26:16.184 00:26:16.184 ' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:16.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.184 --rc genhtml_branch_coverage=1 00:26:16.184 --rc genhtml_function_coverage=1 00:26:16.184 --rc genhtml_legend=1 00:26:16.184 --rc geninfo_all_blocks=1 00:26:16.184 --rc geninfo_unexecuted_blocks=1 00:26:16.184 00:26:16.184 ' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:16.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.184 --rc genhtml_branch_coverage=1 00:26:16.184 --rc genhtml_function_coverage=1 00:26:16.184 --rc genhtml_legend=1 00:26:16.184 --rc geninfo_all_blocks=1 00:26:16.184 --rc geninfo_unexecuted_blocks=1 00:26:16.184 00:26:16.184 ' 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:16.184 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:16.185 19:24:25 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80336 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80336 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80336 ']' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.185 19:24:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.185 [2024-11-27 19:24:25.752418] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
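(Editor's note: the xtrace above is the test's standard bring-up: dirty_shutdown.sh parses its -c/-u options into nv_cache/device, launches spdk_tgt in the background on core 0, and waitforlisten polls the RPC socket before any bdev RPCs are issued. A minimal sketch of that pattern follows; it assumes only rpc.py's -s/-t flags and the standard rpc_get_methods RPC, and the 100-retry budget mirrors max_retries=100 in the trace. Everything else is illustrative, not the test's exact code.)

    #!/usr/bin/env bash
    # Sketch of the spdk_tgt bring-up seen in the trace above (illustrative).
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk.sock

    # Launch the target pinned to core 0, matching "-m 0x1" in the log.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!

    # Poll the RPC socket until the target answers, as waitforlisten does.
    for ((i = 1; i <= 100; i++)); do
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
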
00:26:16.185 [2024-11-27 19:24:25.752820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80336 ] 00:26:16.447 [2024-11-27 19:24:25.918392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.447 [2024-11-27 19:24:26.040161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:17.389 19:24:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:17.651 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:17.651 { 00:26:17.651 "name": "nvme0n1", 00:26:17.651 "aliases": [ 00:26:17.651 "e8b0f8d1-20a9-41a9-8796-74b3850c1880" 00:26:17.651 ], 00:26:17.651 "product_name": "NVMe disk", 00:26:17.651 "block_size": 4096, 00:26:17.651 "num_blocks": 1310720, 00:26:17.651 "uuid": "e8b0f8d1-20a9-41a9-8796-74b3850c1880", 00:26:17.651 "numa_id": -1, 00:26:17.651 "assigned_rate_limits": { 00:26:17.651 "rw_ios_per_sec": 0, 00:26:17.651 "rw_mbytes_per_sec": 0, 00:26:17.651 "r_mbytes_per_sec": 0, 00:26:17.651 "w_mbytes_per_sec": 0 00:26:17.651 }, 00:26:17.651 "claimed": true, 00:26:17.651 "claim_type": "read_many_write_one", 00:26:17.651 "zoned": false, 00:26:17.651 "supported_io_types": { 00:26:17.651 "read": true, 00:26:17.651 "write": true, 00:26:17.652 "unmap": true, 00:26:17.652 "flush": true, 00:26:17.652 "reset": true, 00:26:17.652 "nvme_admin": true, 00:26:17.652 "nvme_io": true, 00:26:17.652 "nvme_io_md": false, 00:26:17.652 "write_zeroes": true, 00:26:17.652 "zcopy": false, 00:26:17.652 "get_zone_info": false, 00:26:17.652 "zone_management": false, 00:26:17.652 "zone_append": false, 00:26:17.652 "compare": true, 00:26:17.652 "compare_and_write": false, 00:26:17.652 "abort": true, 00:26:17.652 "seek_hole": false, 00:26:17.652 "seek_data": false, 00:26:17.652 
"copy": true, 00:26:17.652 "nvme_iov_md": false 00:26:17.652 }, 00:26:17.652 "driver_specific": { 00:26:17.652 "nvme": [ 00:26:17.652 { 00:26:17.652 "pci_address": "0000:00:11.0", 00:26:17.652 "trid": { 00:26:17.652 "trtype": "PCIe", 00:26:17.652 "traddr": "0000:00:11.0" 00:26:17.652 }, 00:26:17.652 "ctrlr_data": { 00:26:17.652 "cntlid": 0, 00:26:17.652 "vendor_id": "0x1b36", 00:26:17.652 "model_number": "QEMU NVMe Ctrl", 00:26:17.652 "serial_number": "12341", 00:26:17.652 "firmware_revision": "8.0.0", 00:26:17.652 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:17.652 "oacs": { 00:26:17.652 "security": 0, 00:26:17.652 "format": 1, 00:26:17.652 "firmware": 0, 00:26:17.652 "ns_manage": 1 00:26:17.652 }, 00:26:17.652 "multi_ctrlr": false, 00:26:17.652 "ana_reporting": false 00:26:17.652 }, 00:26:17.652 "vs": { 00:26:17.652 "nvme_version": "1.4" 00:26:17.652 }, 00:26:17.652 "ns_data": { 00:26:17.652 "id": 1, 00:26:17.652 "can_share": false 00:26:17.652 } 00:26:17.652 } 00:26:17.652 ], 00:26:17.652 "mp_policy": "active_passive" 00:26:17.652 } 00:26:17.652 } 00:26:17.652 ]' 00:26:17.652 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:17.913 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:18.178 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=243a8426-2a59-4829-b5b3-4d5cb01d4a65 00:26:18.178 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:18.178 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 243a8426-2a59-4829-b5b3-4d5cb01d4a65 00:26:18.178 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:18.438 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e08b2492-637e-4844-8bbd-696cb073d8a0 00:26:18.438 19:24:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e08b2492-637e-4844-8bbd-696cb073d8a0 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:18.699 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:18.958 { 00:26:18.958 "name": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:18.958 "aliases": [ 00:26:18.958 "lvs/nvme0n1p0" 00:26:18.958 ], 00:26:18.958 "product_name": "Logical Volume", 00:26:18.958 "block_size": 4096, 00:26:18.958 "num_blocks": 26476544, 00:26:18.958 "uuid": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:18.958 "assigned_rate_limits": { 00:26:18.958 "rw_ios_per_sec": 0, 00:26:18.958 "rw_mbytes_per_sec": 0, 00:26:18.958 "r_mbytes_per_sec": 0, 00:26:18.958 "w_mbytes_per_sec": 0 00:26:18.958 }, 00:26:18.958 "claimed": false, 00:26:18.958 "zoned": false, 00:26:18.958 "supported_io_types": { 00:26:18.958 "read": true, 00:26:18.958 "write": true, 00:26:18.958 "unmap": true, 00:26:18.958 "flush": false, 00:26:18.958 "reset": true, 00:26:18.958 "nvme_admin": false, 00:26:18.958 "nvme_io": false, 00:26:18.958 "nvme_io_md": false, 00:26:18.958 "write_zeroes": true, 00:26:18.958 "zcopy": false, 00:26:18.958 "get_zone_info": false, 00:26:18.958 "zone_management": false, 00:26:18.958 "zone_append": false, 00:26:18.958 "compare": false, 00:26:18.958 "compare_and_write": false, 00:26:18.958 "abort": false, 00:26:18.958 "seek_hole": true, 00:26:18.958 "seek_data": true, 00:26:18.958 "copy": false, 00:26:18.958 "nvme_iov_md": false 00:26:18.958 }, 00:26:18.958 "driver_specific": { 00:26:18.958 "lvol": { 00:26:18.958 "lvol_store_uuid": "e08b2492-637e-4844-8bbd-696cb073d8a0", 00:26:18.958 "base_bdev": "nvme0n1", 00:26:18.958 "thin_provision": true, 00:26:18.958 "num_allocated_clusters": 0, 00:26:18.958 "snapshot": false, 00:26:18.958 "clone": false, 00:26:18.958 "esnap_clone": false 00:26:18.958 } 00:26:18.958 } 00:26:18.958 } 00:26:18.958 ]' 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:18.958 19:24:28 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:19.219 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:19.480 { 00:26:19.480 "name": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:19.480 "aliases": [ 00:26:19.480 "lvs/nvme0n1p0" 00:26:19.480 ], 00:26:19.480 "product_name": "Logical Volume", 00:26:19.480 "block_size": 4096, 00:26:19.480 "num_blocks": 26476544, 00:26:19.480 "uuid": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:19.480 "assigned_rate_limits": { 00:26:19.480 "rw_ios_per_sec": 0, 00:26:19.480 "rw_mbytes_per_sec": 0, 00:26:19.480 "r_mbytes_per_sec": 0, 00:26:19.480 "w_mbytes_per_sec": 0 00:26:19.480 }, 00:26:19.480 "claimed": false, 00:26:19.480 "zoned": false, 00:26:19.480 "supported_io_types": { 00:26:19.480 "read": true, 00:26:19.480 "write": true, 00:26:19.480 "unmap": true, 00:26:19.480 "flush": false, 00:26:19.480 "reset": true, 00:26:19.480 "nvme_admin": false, 00:26:19.480 "nvme_io": false, 00:26:19.480 "nvme_io_md": false, 00:26:19.480 "write_zeroes": true, 00:26:19.480 "zcopy": false, 00:26:19.480 "get_zone_info": false, 00:26:19.480 "zone_management": false, 00:26:19.480 "zone_append": false, 00:26:19.480 "compare": false, 00:26:19.480 "compare_and_write": false, 00:26:19.480 "abort": false, 00:26:19.480 "seek_hole": true, 00:26:19.480 "seek_data": true, 00:26:19.480 "copy": false, 00:26:19.480 "nvme_iov_md": false 00:26:19.480 }, 00:26:19.480 "driver_specific": { 00:26:19.480 "lvol": { 00:26:19.480 "lvol_store_uuid": "e08b2492-637e-4844-8bbd-696cb073d8a0", 00:26:19.480 "base_bdev": "nvme0n1", 00:26:19.480 "thin_provision": true, 00:26:19.480 "num_allocated_clusters": 0, 00:26:19.480 "snapshot": false, 00:26:19.480 "clone": false, 00:26:19.480 "esnap_clone": false 00:26:19.480 } 00:26:19.480 } 00:26:19.480 } 00:26:19.480 ]' 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:19.480 19:24:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:19.747 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea0f5982-6e34-4c10-a117-d3e7128020b2 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:20.008 { 00:26:20.008 "name": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:20.008 "aliases": [ 00:26:20.008 "lvs/nvme0n1p0" 00:26:20.008 ], 00:26:20.008 "product_name": "Logical Volume", 00:26:20.008 "block_size": 4096, 00:26:20.008 "num_blocks": 26476544, 00:26:20.008 "uuid": "ea0f5982-6e34-4c10-a117-d3e7128020b2", 00:26:20.008 "assigned_rate_limits": { 00:26:20.008 "rw_ios_per_sec": 0, 00:26:20.008 "rw_mbytes_per_sec": 0, 00:26:20.008 "r_mbytes_per_sec": 0, 00:26:20.008 "w_mbytes_per_sec": 0 00:26:20.008 }, 00:26:20.008 "claimed": false, 00:26:20.008 "zoned": false, 00:26:20.008 "supported_io_types": { 00:26:20.008 "read": true, 00:26:20.008 "write": true, 00:26:20.008 "unmap": true, 00:26:20.008 "flush": false, 00:26:20.008 "reset": true, 00:26:20.008 "nvme_admin": false, 00:26:20.008 "nvme_io": false, 00:26:20.008 "nvme_io_md": false, 00:26:20.008 "write_zeroes": true, 00:26:20.008 "zcopy": false, 00:26:20.008 "get_zone_info": false, 00:26:20.008 "zone_management": false, 00:26:20.008 "zone_append": false, 00:26:20.008 "compare": false, 00:26:20.008 "compare_and_write": false, 00:26:20.008 "abort": false, 00:26:20.008 "seek_hole": true, 00:26:20.008 "seek_data": true, 00:26:20.008 "copy": false, 00:26:20.008 "nvme_iov_md": false 00:26:20.008 }, 00:26:20.008 "driver_specific": { 00:26:20.008 "lvol": { 00:26:20.008 "lvol_store_uuid": "e08b2492-637e-4844-8bbd-696cb073d8a0", 00:26:20.008 "base_bdev": "nvme0n1", 00:26:20.008 "thin_provision": true, 00:26:20.008 "num_allocated_clusters": 0, 00:26:20.008 "snapshot": false, 00:26:20.008 "clone": false, 00:26:20.008 "esnap_clone": false 00:26:20.008 } 00:26:20.008 } 00:26:20.008 } 00:26:20.008 ]' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ea0f5982-6e34-4c10-a117-d3e7128020b2 
--l2p_dram_limit 10' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:20.008 19:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ea0f5982-6e34-4c10-a117-d3e7128020b2 --l2p_dram_limit 10 -c nvc0n1p0 00:26:20.270 [2024-11-27 19:24:29.643789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.643829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:20.270 [2024-11-27 19:24:29.643842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:20.270 [2024-11-27 19:24:29.643849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.643895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.643903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:20.270 [2024-11-27 19:24:29.643911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:20.270 [2024-11-27 19:24:29.643917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.643933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:20.270 [2024-11-27 19:24:29.644506] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:20.270 [2024-11-27 19:24:29.644523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.644529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:20.270 [2024-11-27 19:24:29.644537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:26:20.270 [2024-11-27 19:24:29.644543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.644571] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ad8a3954-0671-4961-b6b8-ebecf9396cce 00:26:20.270 [2024-11-27 19:24:29.645522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.645624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:20.270 [2024-11-27 19:24:29.645638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:20.270 [2024-11-27 19:24:29.645647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.650415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.650446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:20.270 [2024-11-27 19:24:29.650453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:26:20.270 [2024-11-27 19:24:29.650460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.650527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.650535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:20.270 [2024-11-27 19:24:29.650541] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:20.270 [2024-11-27 19:24:29.650551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.650591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.650600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:20.270 [2024-11-27 19:24:29.650608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:20.270 [2024-11-27 19:24:29.650614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.650630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:20.270 [2024-11-27 19:24:29.653503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.653527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:20.270 [2024-11-27 19:24:29.653537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:26:20.270 [2024-11-27 19:24:29.653543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.653569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.653575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:20.270 [2024-11-27 19:24:29.653583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:20.270 [2024-11-27 19:24:29.653589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.653602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:20.270 [2024-11-27 19:24:29.653707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:20.270 [2024-11-27 19:24:29.653719] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:20.270 [2024-11-27 19:24:29.653728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:20.270 [2024-11-27 19:24:29.653737] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:20.270 [2024-11-27 19:24:29.653744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:20.270 [2024-11-27 19:24:29.653751] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:20.270 [2024-11-27 19:24:29.653758] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:20.270 [2024-11-27 19:24:29.653765] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:20.270 [2024-11-27 19:24:29.653771] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:20.270 [2024-11-27 19:24:29.653778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.653788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:20.270 [2024-11-27 19:24:29.653795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:26:20.270 [2024-11-27 19:24:29.653801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.653867] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.270 [2024-11-27 19:24:29.653874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:20.270 [2024-11-27 19:24:29.653881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:20.270 [2024-11-27 19:24:29.653886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.270 [2024-11-27 19:24:29.653966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:20.270 [2024-11-27 19:24:29.653973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:20.270 [2024-11-27 19:24:29.653980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.270 [2024-11-27 19:24:29.653986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.270 [2024-11-27 19:24:29.653993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:20.270 [2024-11-27 19:24:29.653998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:20.270 [2024-11-27 19:24:29.654016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.270 [2024-11-27 19:24:29.654027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:20.270 [2024-11-27 19:24:29.654033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:20.270 [2024-11-27 19:24:29.654040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:20.270 [2024-11-27 19:24:29.654045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:20.270 [2024-11-27 19:24:29.654051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:20.270 [2024-11-27 19:24:29.654056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:20.270 [2024-11-27 19:24:29.654070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:20.270 [2024-11-27 19:24:29.654089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:20.270 [2024-11-27 19:24:29.654105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:20.270 [2024-11-27 19:24:29.654137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654150] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:20.270 [2024-11-27 19:24:29.654155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:20.270 [2024-11-27 19:24:29.654167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:20.270 [2024-11-27 19:24:29.654174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.270 [2024-11-27 19:24:29.654186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:20.270 [2024-11-27 19:24:29.654191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:20.270 [2024-11-27 19:24:29.654197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:20.270 [2024-11-27 19:24:29.654203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:20.270 [2024-11-27 19:24:29.654215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:20.270 [2024-11-27 19:24:29.654220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.270 [2024-11-27 19:24:29.654227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:20.271 [2024-11-27 19:24:29.654232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:20.271 [2024-11-27 19:24:29.654238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.271 [2024-11-27 19:24:29.654242] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:20.271 [2024-11-27 19:24:29.654250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:20.271 [2024-11-27 19:24:29.654255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:20.271 [2024-11-27 19:24:29.654262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:20.271 [2024-11-27 19:24:29.654268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:20.271 [2024-11-27 19:24:29.654276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:20.271 [2024-11-27 19:24:29.654281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:20.271 [2024-11-27 19:24:29.654288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:20.271 [2024-11-27 19:24:29.654293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:20.271 [2024-11-27 19:24:29.654299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:20.271 [2024-11-27 19:24:29.654307] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:20.271 [2024-11-27 19:24:29.654317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:20.271 [2024-11-27 19:24:29.654330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:20.271 [2024-11-27 19:24:29.654335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:20.271 [2024-11-27 19:24:29.654342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:20.271 [2024-11-27 19:24:29.654347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:20.271 [2024-11-27 19:24:29.654354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:20.271 [2024-11-27 19:24:29.654359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:20.271 [2024-11-27 19:24:29.654365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:20.271 [2024-11-27 19:24:29.654371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:20.271 [2024-11-27 19:24:29.654378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:20.271 [2024-11-27 19:24:29.654408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:20.271 [2024-11-27 19:24:29.654415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:20.271 [2024-11-27 19:24:29.654428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:20.271 [2024-11-27 19:24:29.654433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:20.271 [2024-11-27 19:24:29.654440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:20.271 [2024-11-27 19:24:29.654445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.271 [2024-11-27 19:24:29.654452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:20.271 [2024-11-27 19:24:29.654458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:26:20.271 [2024-11-27 19:24:29.654464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.271 [2024-11-27 19:24:29.654504] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:20.271 [2024-11-27 19:24:29.654519] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:24.482 [2024-11-27 19:24:33.782172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.782487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:24.482 [2024-11-27 19:24:33.782516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4127.651 ms 00:26:24.482 [2024-11-27 19:24:33.782528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.814373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.814435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:24.482 [2024-11-27 19:24:33.814449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.602 ms 00:26:24.482 [2024-11-27 19:24:33.814460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.814600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.814615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:24.482 [2024-11-27 19:24:33.814625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:24.482 [2024-11-27 19:24:33.814641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.850057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.850292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.482 [2024-11-27 19:24:33.850315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.362 ms 00:26:24.482 [2024-11-27 19:24:33.850326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.850370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.850382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.482 [2024-11-27 19:24:33.850391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:24.482 [2024-11-27 19:24:33.850410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.850969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.850997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.482 [2024-11-27 19:24:33.851007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:26:24.482 [2024-11-27 19:24:33.851018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.851153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.851169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.482 [2024-11-27 19:24:33.851180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:24.482 [2024-11-27 19:24:33.851193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.868509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.868555] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.482 [2024-11-27 19:24:33.868566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.295 ms 00:26:24.482 [2024-11-27 19:24:33.868577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:33.895169] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:24.482 [2024-11-27 19:24:33.898978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:33.899025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:24.482 [2024-11-27 19:24:33.899042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.313 ms 00:26:24.482 [2024-11-27 19:24:33.899052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:34.001447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:34.001667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:24.482 [2024-11-27 19:24:34.001697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.343 ms 00:26:24.482 [2024-11-27 19:24:34.001707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:34.002217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:34.002257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:24.482 [2024-11-27 19:24:34.002275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:26:24.482 [2024-11-27 19:24:34.002287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:34.028958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:34.029182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:24.482 [2024-11-27 19:24:34.029214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.596 ms 00:26:24.482 [2024-11-27 19:24:34.029224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:34.055040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:34.055089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:24.482 [2024-11-27 19:24:34.055106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.757 ms 00:26:24.482 [2024-11-27 19:24:34.055114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.482 [2024-11-27 19:24:34.055752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.482 [2024-11-27 19:24:34.055790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:24.482 [2024-11-27 19:24:34.055806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:26:24.482 [2024-11-27 19:24:34.055815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.145350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.145402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:24.743 [2024-11-27 19:24:34.145422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.468 ms 00:26:24.743 [2024-11-27 19:24:34.145431] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.173175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.173224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:24.743 [2024-11-27 19:24:34.173240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.644 ms 00:26:24.743 [2024-11-27 19:24:34.173248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.199575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.199754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:24.743 [2024-11-27 19:24:34.199780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.270 ms 00:26:24.743 [2024-11-27 19:24:34.199788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.226419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.226606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:24.743 [2024-11-27 19:24:34.226634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:26:24.743 [2024-11-27 19:24:34.226643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.226692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.226702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:24.743 [2024-11-27 19:24:34.226716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.743 [2024-11-27 19:24:34.226725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.226834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.743 [2024-11-27 19:24:34.226848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:24.743 [2024-11-27 19:24:34.226859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:24.743 [2024-11-27 19:24:34.226867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.743 [2024-11-27 19:24:34.228073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4583.780 ms, result 0 00:26:24.743 { 00:26:24.743 "name": "ftl0", 00:26:24.743 "uuid": "ad8a3954-0671-4961-b6b8-ebecf9396cce" 00:26:24.743 } 00:26:24.743 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:24.743 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:25.004 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:25.004 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:25.004 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:25.264 /dev/nbd0 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:25.264 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:25.265 1+0 records in 00:26:25.265 1+0 records out 00:26:25.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557321 s, 7.3 MB/s 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:25.265 19:24:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:25.265 [2024-11-27 19:24:34.801472] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:26:25.265 [2024-11-27 19:24:34.801809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80489 ] 00:26:25.525 [2024-11-27 19:24:34.965433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.525 [2024-11-27 19:24:35.090431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.912  [2024-11-27T19:24:37.491Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-27T19:24:38.433Z] Copying: 372/1024 [MB] (187 MBps) [2024-11-27T19:24:39.375Z] Copying: 566/1024 [MB] (194 MBps) [2024-11-27T19:24:40.757Z] Copying: 760/1024 [MB] (193 MBps) [2024-11-27T19:24:40.757Z] Copying: 975/1024 [MB] (214 MBps) [2024-11-27T19:24:41.330Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:26:31.695 00:26:31.695 19:24:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:33.635 19:24:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:33.896 [2024-11-27 19:24:43.305538] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
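(Editor's note: the waitfornbd trace above reduces to two checks before the nbd endpoint is trusted: the kernel must list nbd0 in /proc/partitions, and a single 4 KiB O_DIRECT read must return data. A condensed sketch of that gate, built only from the commands visible in the xtrace; the retry limit of 20 and the 4096-byte probe match the log, the variable names are editorial:)

    # Sketch of the waitfornbd gate seen in the trace above (illustrative).
    nbd_name=nbd0
    probe=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest

    # Wait for the kernel to register the nbd device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # One 4 KiB direct read proves the device actually serves I/O.
    dd if=/dev/$nbd_name of="$probe" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$probe")
    rm -f "$probe"
    [[ $size != 0 ]]   # non-empty read => device is usable

(Once the device passes this gate, the test fills it through /dev/nbd0 with spdk_dd and records an md5sum of the source file, as shown above, so the contents can be compared after the shutdown scenario.)
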
00:26:33.896 [2024-11-27 19:24:43.305750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80582 ] 00:26:33.896 [2024-11-27 19:24:43.460092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.156 [2024-11-27 19:24:43.561040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.539  [2024-11-27T19:24:46.119Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-27T19:24:47.106Z] Copying: 35/1024 [MB] (21 MBps) [2024-11-27T19:24:48.046Z] Copying: 60/1024 [MB] (24 MBps) [2024-11-27T19:24:48.987Z] Copying: 82/1024 [MB] (21 MBps) [2024-11-27T19:24:49.928Z] Copying: 109/1024 [MB] (27 MBps) [2024-11-27T19:24:50.869Z] Copying: 134/1024 [MB] (24 MBps) [2024-11-27T19:24:51.813Z] Copying: 160/1024 [MB] (26 MBps) [2024-11-27T19:24:53.194Z] Copying: 181/1024 [MB] (20 MBps) [2024-11-27T19:24:54.130Z] Copying: 206/1024 [MB] (25 MBps) [2024-11-27T19:24:55.070Z] Copying: 240/1024 [MB] (34 MBps) [2024-11-27T19:24:56.012Z] Copying: 262/1024 [MB] (21 MBps) [2024-11-27T19:24:56.955Z] Copying: 286/1024 [MB] (24 MBps) [2024-11-27T19:24:57.890Z] Copying: 311/1024 [MB] (24 MBps) [2024-11-27T19:24:58.825Z] Copying: 342/1024 [MB] (30 MBps) [2024-11-27T19:25:00.207Z] Copying: 373/1024 [MB] (30 MBps) [2024-11-27T19:25:01.147Z] Copying: 392/1024 [MB] (19 MBps) [2024-11-27T19:25:02.082Z] Copying: 416/1024 [MB] (23 MBps) [2024-11-27T19:25:03.025Z] Copying: 448/1024 [MB] (32 MBps) [2024-11-27T19:25:03.959Z] Copying: 469/1024 [MB] (21 MBps) [2024-11-27T19:25:04.901Z] Copying: 497/1024 [MB] (28 MBps) [2024-11-27T19:25:05.841Z] Copying: 525/1024 [MB] (27 MBps) [2024-11-27T19:25:07.221Z] Copying: 550/1024 [MB] (24 MBps) [2024-11-27T19:25:08.157Z] Copying: 574/1024 [MB] (24 MBps) [2024-11-27T19:25:09.094Z] Copying: 601/1024 [MB] (26 MBps) [2024-11-27T19:25:10.035Z] Copying: 634/1024 [MB] (33 MBps) [2024-11-27T19:25:10.974Z] Copying: 656/1024 [MB] (22 MBps) [2024-11-27T19:25:11.913Z] Copying: 682/1024 [MB] (26 MBps) [2024-11-27T19:25:12.857Z] Copying: 702/1024 [MB] (19 MBps) [2024-11-27T19:25:14.243Z] Copying: 723/1024 [MB] (21 MBps) [2024-11-27T19:25:14.814Z] Copying: 747/1024 [MB] (24 MBps) [2024-11-27T19:25:16.191Z] Copying: 768/1024 [MB] (20 MBps) [2024-11-27T19:25:17.135Z] Copying: 798/1024 [MB] (30 MBps) [2024-11-27T19:25:18.071Z] Copying: 818/1024 [MB] (19 MBps) [2024-11-27T19:25:19.093Z] Copying: 847/1024 [MB] (29 MBps) [2024-11-27T19:25:20.072Z] Copying: 880/1024 [MB] (32 MBps) [2024-11-27T19:25:21.006Z] Copying: 902/1024 [MB] (22 MBps) [2024-11-27T19:25:21.938Z] Copying: 932/1024 [MB] (29 MBps) [2024-11-27T19:25:22.875Z] Copying: 966/1024 [MB] (33 MBps) [2024-11-27T19:25:23.813Z] Copying: 996/1024 [MB] (29 MBps) [2024-11-27T19:25:24.380Z] Copying: 1024/1024 [MB] (average 25 MBps) 00:27:14.745 00:27:14.745 19:25:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:14.745 19:25:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:15.004 19:25:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:15.265 [2024-11-27 19:25:24.751863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.751909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:27:15.265 [2024-11-27 19:25:24.751922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:15.265 [2024-11-27 19:25:24.751933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.751952] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:15.265 [2024-11-27 19:25:24.754224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.754252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:15.265 [2024-11-27 19:25:24.754263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:27:15.265 [2024-11-27 19:25:24.754271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.756983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.757012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:15.265 [2024-11-27 19:25:24.757023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:27:15.265 [2024-11-27 19:25:24.757029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.772609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.772636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:15.265 [2024-11-27 19:25:24.772648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.559 ms 00:27:15.265 [2024-11-27 19:25:24.772654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.777723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.777747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:15.265 [2024-11-27 19:25:24.777758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.038 ms 00:27:15.265 [2024-11-27 19:25:24.777765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.797239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.797265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:15.265 [2024-11-27 19:25:24.797276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:27:15.265 [2024-11-27 19:25:24.797282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.810688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.810718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:15.265 [2024-11-27 19:25:24.810733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.372 ms 00:27:15.265 [2024-11-27 19:25:24.810740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.810857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.810867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:15.265 [2024-11-27 19:25:24.810876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:15.265 [2024-11-27 19:25:24.810882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
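The persistence steps being traced here all fall out of the clean unload issued at dirty_shutdown.sh@78-@80 above, restated as a minimal sketch; $rootdir stands in for the /home/vagrant/spdk_repo/spdk checkout that the logged paths point at, and the three commands are exactly the ones recorded in the trace.

rootdir=/home/vagrant/spdk_repo/spdk
sync /dev/nbd0                                      # flush buffered writes into the export
"$rootdir/scripts/rpc.py" nbd_stop_disk /dev/nbd0   # detach nbd before unloading
"$rootdir/scripts/rpc.py" bdev_ftl_unload -b ftl0   # drives the trace_step sequence logged
                                                    # here: L2P, NV cache, valid map, P2L,
                                                    # band info, trim metadata, superblock,
                                                    # then the clean-state flag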
00:27:15.265 [2024-11-27 19:25:24.829543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.829570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:15.265 [2024-11-27 19:25:24.829581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.643 ms 00:27:15.265 [2024-11-27 19:25:24.829586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.847199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.847225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:15.265 [2024-11-27 19:25:24.847235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.581 ms 00:27:15.265 [2024-11-27 19:25:24.847240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.864938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.864963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:15.265 [2024-11-27 19:25:24.864973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.665 ms 00:27:15.265 [2024-11-27 19:25:24.864978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.882032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.265 [2024-11-27 19:25:24.882058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:15.265 [2024-11-27 19:25:24.882068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.992 ms 00:27:15.265 [2024-11-27 19:25:24.882074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.265 [2024-11-27 19:25:24.882103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:15.265 [2024-11-27 19:25:24.882116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882208] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:15.265 [2024-11-27 19:25:24.882348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 
19:25:24.882401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:27:15.266 [2024-11-27 19:25:24.882585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:15.266 [2024-11-27 19:25:24.882848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:15.266 [2024-11-27 19:25:24.882855] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ad8a3954-0671-4961-b6b8-ebecf9396cce 00:27:15.266 [2024-11-27 19:25:24.882861] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:15.266 [2024-11-27 19:25:24.882869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:15.266 [2024-11-27 19:25:24.882876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:15.266 [2024-11-27 19:25:24.882883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:15.266 [2024-11-27 19:25:24.882888] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:15.266 [2024-11-27 19:25:24.882896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:15.266 [2024-11-27 19:25:24.882902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:15.266 [2024-11-27 19:25:24.882908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:15.266 [2024-11-27 19:25:24.882913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:15.266 [2024-11-27 19:25:24.882920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.266 [2024-11-27 19:25:24.882925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:15.266 [2024-11-27 19:25:24.882933] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:27:15.266 [2024-11-27 19:25:24.882938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.266 [2024-11-27 19:25:24.892979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.266 [2024-11-27 19:25:24.893003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:15.266 [2024-11-27 19:25:24.893013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.000 ms 00:27:15.266 [2024-11-27 19:25:24.893019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.266 [2024-11-27 19:25:24.893325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.266 [2024-11-27 19:25:24.893333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:15.266 [2024-11-27 19:25:24.893342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:27:15.266 [2024-11-27 19:25:24.893348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:24.928001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:24.928028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.528 [2024-11-27 19:25:24.928038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:24.928044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:24.928094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:24.928101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.528 [2024-11-27 19:25:24.928109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:24.928116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:24.928188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:24.928199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.528 [2024-11-27 19:25:24.928206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:24.928213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:24.928230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:24.928237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.528 [2024-11-27 19:25:24.928245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:24.928251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:24.990630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:24.990664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:15.528 [2024-11-27 19:25:24.990676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:24.990682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:27:15.528 [2024-11-27 19:25:25.042252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.528 [2024-11-27 19:25:25.042397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.528 [2024-11-27 19:25:25.042462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.528 [2024-11-27 19:25:25.042567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:15.528 [2024-11-27 19:25:25.042621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:15.528 [2024-11-27 19:25:25.042679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.528 [2024-11-27 19:25:25.042738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:15.528 [2024-11-27 19:25:25.042747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.528 [2024-11-27 19:25:25.042754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.528 [2024-11-27 19:25:25.042876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.978 ms, result 0 00:27:15.528 true 00:27:15.528 19:25:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80336 00:27:15.528 19:25:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80336 00:27:15.528 19:25:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:15.528 [2024-11-27 19:25:25.117214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:27:15.528 [2024-11-27 19:25:25.117303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81019 ] 00:27:15.788 [2024-11-27 19:25:25.264935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.788 [2024-11-27 19:25:25.362026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.163  [2024-11-27T19:25:27.736Z] Copying: 251/1024 [MB] (251 MBps) [2024-11-27T19:25:28.682Z] Copying: 499/1024 [MB] (248 MBps) [2024-11-27T19:25:29.626Z] Copying: 695/1024 [MB] (195 MBps) [2024-11-27T19:25:29.886Z] Copying: 950/1024 [MB] (255 MBps) [2024-11-27T19:25:30.455Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:27:20.820 00:27:20.820 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80336 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:20.820 19:25:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:21.080 [2024-11-27 19:25:30.481906] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:27:21.080 [2024-11-27 19:25:30.482218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81075 ] 00:27:21.080 [2024-11-27 19:25:30.636420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.341 [2024-11-27 19:25:30.714699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.341 [2024-11-27 19:25:30.923927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.341 [2024-11-27 19:25:30.923980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.603 [2024-11-27 19:25:30.986489] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:21.603 [2024-11-27 19:25:30.986761] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:21.603 [2024-11-27 19:25:30.986979] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:21.603 [2024-11-27 19:25:31.225176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.603 [2024-11-27 19:25:31.225297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.603 [2024-11-27 19:25:31.225312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:21.603 [2024-11-27 19:25:31.225321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.603 [2024-11-27 19:25:31.225358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.603 [2024-11-27 19:25:31.225366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.603 [2024-11-27 19:25:31.225372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:21.603 [2024-11-27 19:25:31.225377] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:21.603 [2024-11-27 19:25:31.225397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.603 [2024-11-27 19:25:31.225934] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.603 [2024-11-27 19:25:31.225950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.603 [2024-11-27 19:25:31.225956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.603 [2024-11-27 19:25:31.225962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:27:21.603 [2024-11-27 19:25:31.225967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.603 [2024-11-27 19:25:31.226889] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:21.603 [2024-11-27 19:25:31.236636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.603 [2024-11-27 19:25:31.236745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:21.603 [2024-11-27 19:25:31.236759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.748 ms 00:27:21.603 [2024-11-27 19:25:31.236765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.603 [2024-11-27 19:25:31.236802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.603 [2024-11-27 19:25:31.236809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:21.603 [2024-11-27 19:25:31.236815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:21.603 [2024-11-27 19:25:31.236820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.241149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.241171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.866 [2024-11-27 19:25:31.241178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:27:21.866 [2024-11-27 19:25:31.241184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.241236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.241243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.866 [2024-11-27 19:25:31.241249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:21.866 [2024-11-27 19:25:31.241256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.241286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.241293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.866 [2024-11-27 19:25:31.241299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.866 [2024-11-27 19:25:31.241304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.241317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:21.866 [2024-11-27 19:25:31.243906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.244009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.866 
[2024-11-27 19:25:31.244021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:27:21.866 [2024-11-27 19:25:31.244027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.244054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.244061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.866 [2024-11-27 19:25:31.244067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:21.866 [2024-11-27 19:25:31.244075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.244088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:21.866 [2024-11-27 19:25:31.244103] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:21.866 [2024-11-27 19:25:31.244137] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:21.866 [2024-11-27 19:25:31.244148] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:21.866 [2024-11-27 19:25:31.244227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:21.866 [2024-11-27 19:25:31.244235] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:21.866 [2024-11-27 19:25:31.244245] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:21.866 [2024-11-27 19:25:31.244253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:21.866 [2024-11-27 19:25:31.244260] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:21.866 [2024-11-27 19:25:31.244266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:21.866 [2024-11-27 19:25:31.244271] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:21.866 [2024-11-27 19:25:31.244277] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:21.866 [2024-11-27 19:25:31.244282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:21.866 [2024-11-27 19:25:31.244288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.244294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:21.866 [2024-11-27 19:25:31.244300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:27:21.866 [2024-11-27 19:25:31.244305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.244369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.866 [2024-11-27 19:25:31.244376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:21.866 [2024-11-27 19:25:31.244382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:21.866 [2024-11-27 19:25:31.244387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.866 [2024-11-27 19:25:31.244461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:21.866 [2024-11-27 19:25:31.244469] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:21.866 [2024-11-27 19:25:31.244476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.866 [2024-11-27 19:25:31.244482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.866 [2024-11-27 19:25:31.244488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:21.866 [2024-11-27 19:25:31.244494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:21.866 [2024-11-27 19:25:31.244499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:21.866 [2024-11-27 19:25:31.244504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:21.866 [2024-11-27 19:25:31.244510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:21.866 [2024-11-27 19:25:31.244520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.866 [2024-11-27 19:25:31.244525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:21.866 [2024-11-27 19:25:31.244531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:21.866 [2024-11-27 19:25:31.244537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.867 [2024-11-27 19:25:31.244542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:21.867 [2024-11-27 19:25:31.244547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:21.867 [2024-11-27 19:25:31.244552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:21.867 [2024-11-27 19:25:31.244562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:21.867 [2024-11-27 19:25:31.244576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:21.867 [2024-11-27 19:25:31.244591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:21.867 [2024-11-27 19:25:31.244606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:21.867 [2024-11-27 19:25:31.244620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:21.867 [2024-11-27 19:25:31.244634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244639] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.867 [2024-11-27 19:25:31.244644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:21.867 [2024-11-27 19:25:31.244649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:21.867 [2024-11-27 19:25:31.244654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.867 [2024-11-27 19:25:31.244659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:21.867 [2024-11-27 19:25:31.244664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:21.867 [2024-11-27 19:25:31.244668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:21.867 [2024-11-27 19:25:31.244678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:21.867 [2024-11-27 19:25:31.244687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244693] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:21.867 [2024-11-27 19:25:31.244700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:21.867 [2024-11-27 19:25:31.244705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.867 [2024-11-27 19:25:31.244716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:21.867 [2024-11-27 19:25:31.244721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:21.867 [2024-11-27 19:25:31.244726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:21.867 [2024-11-27 19:25:31.244731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:21.867 [2024-11-27 19:25:31.244737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:21.867 [2024-11-27 19:25:31.244741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:21.867 [2024-11-27 19:25:31.244747] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:21.867 [2024-11-27 19:25:31.244754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:21.867 [2024-11-27 19:25:31.244765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:21.867 [2024-11-27 19:25:31.244771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:21.867 [2024-11-27 19:25:31.244778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:21.867 [2024-11-27 19:25:31.244783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:21.867 [2024-11-27 19:25:31.244788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:21.867 [2024-11-27 19:25:31.244793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:21.867 [2024-11-27 19:25:31.244799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:21.867 [2024-11-27 19:25:31.244804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:21.867 [2024-11-27 19:25:31.244809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:21.867 [2024-11-27 19:25:31.244836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:21.867 [2024-11-27 19:25:31.244842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.867 [2024-11-27 19:25:31.244853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:21.867 [2024-11-27 19:25:31.244859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:21.867 [2024-11-27 19:25:31.244866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:21.867 [2024-11-27 19:25:31.244872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.867 [2024-11-27 19:25:31.244878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:21.867 [2024-11-27 19:25:31.244884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:27:21.867 [2024-11-27 19:25:31.244891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.867 [2024-11-27 19:25:31.265637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.867 [2024-11-27 19:25:31.265662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:21.867 [2024-11-27 19:25:31.265669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.715 ms 00:27:21.867 [2024-11-27 19:25:31.265677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.265736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.265743] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:21.868 [2024-11-27 19:25:31.265749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:21.868 [2024-11-27 19:25:31.265754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.301670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.301702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.868 [2024-11-27 19:25:31.301712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.877 ms 00:27:21.868 [2024-11-27 19:25:31.301718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.301744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.301750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.868 [2024-11-27 19:25:31.301757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:21.868 [2024-11-27 19:25:31.301763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.302086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.302100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.868 [2024-11-27 19:25:31.302111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:27:21.868 [2024-11-27 19:25:31.302118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.302227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.302235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.868 [2024-11-27 19:25:31.302242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:21.868 [2024-11-27 19:25:31.302248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.312936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.313037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.868 [2024-11-27 19:25:31.313077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.671 ms 00:27:21.868 [2024-11-27 19:25:31.313095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.326631] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:21.868 [2024-11-27 19:25:31.326741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:21.868 [2024-11-27 19:25:31.326787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.326803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:21.868 [2024-11-27 19:25:31.326818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.593 ms 00:27:21.868 [2024-11-27 19:25:31.326833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.345704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.345808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:21.868 [2024-11-27 
19:25:31.345851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.834 ms 00:27:21.868 [2024-11-27 19:25:31.345869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.354780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.354869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:21.868 [2024-11-27 19:25:31.354908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.847 ms 00:27:21.868 [2024-11-27 19:25:31.354925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.363664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.363749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:21.868 [2024-11-27 19:25:31.363788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:27:21.868 [2024-11-27 19:25:31.363805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.364306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.364380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.868 [2024-11-27 19:25:31.364419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:27:21.868 [2024-11-27 19:25:31.364436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.408710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.408861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:21.868 [2024-11-27 19:25:31.408902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.249 ms 00:27:21.868 [2024-11-27 19:25:31.408920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.416943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.868 [2024-11-27 19:25:31.419101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.419207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.868 [2024-11-27 19:25:31.419255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.141 ms 00:27:21.868 [2024-11-27 19:25:31.419272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.419350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.419382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:21.868 [2024-11-27 19:25:31.419399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:21.868 [2024-11-27 19:25:31.419448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.419520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.419596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.868 [2024-11-27 19:25:31.419614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:21.868 [2024-11-27 19:25:31.419633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.419662] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.419679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.868 [2024-11-27 19:25:31.419724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.868 [2024-11-27 19:25:31.419742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.419777] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:21.868 [2024-11-27 19:25:31.419796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.419811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:21.868 [2024-11-27 19:25:31.419829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:21.868 [2024-11-27 19:25:31.419882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.437511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.437605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.868 [2024-11-27 19:25:31.437618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.603 ms 00:27:21.868 [2024-11-27 19:25:31.437625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.868 [2024-11-27 19:25:31.437679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.868 [2024-11-27 19:25:31.437687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.868 [2024-11-27 19:25:31.437693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:21.868 [2024-11-27 19:25:31.437701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.869 [2024-11-27 19:25:31.438459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 212.912 ms, result 0 00:27:23.257  [2024-11-27T19:25:33.464Z] Copying: 38/1024 [MB] (38 MBps) [2024-11-27T19:25:34.853Z] Copying: 49/1024 [MB] (11 MBps) [2024-11-27T19:25:35.797Z] Copying: 69/1024 [MB] (19 MBps) [2024-11-27T19:25:36.743Z] Copying: 88/1024 [MB] (18 MBps) [2024-11-27T19:25:37.688Z] Copying: 98/1024 [MB] (10 MBps) [2024-11-27T19:25:38.634Z] Copying: 113/1024 [MB] (15 MBps) [2024-11-27T19:25:39.578Z] Copying: 132/1024 [MB] (18 MBps) [2024-11-27T19:25:40.523Z] Copying: 145448/1048576 [kB] (10164 kBps) [2024-11-27T19:25:41.471Z] Copying: 154/1024 [MB] (12 MBps) [2024-11-27T19:25:42.861Z] Copying: 167/1024 [MB] (12 MBps) [2024-11-27T19:25:43.804Z] Copying: 181384/1048576 [kB] (10144 kBps) [2024-11-27T19:25:44.747Z] Copying: 191484/1048576 [kB] (10100 kBps) [2024-11-27T19:25:45.690Z] Copying: 198/1024 [MB] (11 MBps) [2024-11-27T19:25:46.635Z] Copying: 217/1024 [MB] (19 MBps) [2024-11-27T19:25:47.580Z] Copying: 233/1024 [MB] (15 MBps) [2024-11-27T19:25:48.526Z] Copying: 249/1024 [MB] (16 MBps) [2024-11-27T19:25:49.586Z] Copying: 265/1024 [MB] (15 MBps) [2024-11-27T19:25:50.531Z] Copying: 290/1024 [MB] (25 MBps) [2024-11-27T19:25:51.475Z] Copying: 303/1024 [MB] (12 MBps) [2024-11-27T19:25:52.865Z] Copying: 324/1024 [MB] (20 MBps) [2024-11-27T19:25:53.809Z] Copying: 336/1024 [MB] (12 MBps) [2024-11-27T19:25:54.753Z] Copying: 361/1024 [MB] (24 MBps) [2024-11-27T19:25:55.698Z] Copying: 377/1024 [MB] (16 MBps) [2024-11-27T19:25:56.641Z] Copying: 394/1024 [MB] (16 MBps) 
[2024-11-27T19:25:57.584Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-27T19:25:58.529Z] Copying: 429/1024 [MB] (17 MBps) [2024-11-27T19:25:59.471Z] Copying: 444/1024 [MB] (15 MBps) [2024-11-27T19:26:00.860Z] Copying: 460/1024 [MB] (15 MBps) [2024-11-27T19:26:01.805Z] Copying: 470/1024 [MB] (10 MBps) [2024-11-27T19:26:02.775Z] Copying: 484/1024 [MB] (13 MBps) [2024-11-27T19:26:03.717Z] Copying: 500/1024 [MB] (15 MBps) [2024-11-27T19:26:04.661Z] Copying: 518/1024 [MB] (18 MBps) [2024-11-27T19:26:05.605Z] Copying: 530/1024 [MB] (11 MBps) [2024-11-27T19:26:06.549Z] Copying: 558/1024 [MB] (28 MBps) [2024-11-27T19:26:07.491Z] Copying: 578/1024 [MB] (19 MBps) [2024-11-27T19:26:08.874Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-27T19:26:09.815Z] Copying: 608/1024 [MB] (13 MBps) [2024-11-27T19:26:10.759Z] Copying: 624/1024 [MB] (16 MBps) [2024-11-27T19:26:11.702Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-27T19:26:12.647Z] Copying: 659/1024 [MB] (15 MBps) [2024-11-27T19:26:13.592Z] Copying: 691/1024 [MB] (31 MBps) [2024-11-27T19:26:14.537Z] Copying: 710/1024 [MB] (19 MBps) [2024-11-27T19:26:15.480Z] Copying: 728/1024 [MB] (17 MBps) [2024-11-27T19:26:16.870Z] Copying: 749/1024 [MB] (20 MBps) [2024-11-27T19:26:17.815Z] Copying: 763/1024 [MB] (14 MBps) [2024-11-27T19:26:18.761Z] Copying: 789/1024 [MB] (25 MBps) [2024-11-27T19:26:19.705Z] Copying: 805/1024 [MB] (16 MBps) [2024-11-27T19:26:20.648Z] Copying: 821/1024 [MB] (16 MBps) [2024-11-27T19:26:21.735Z] Copying: 837/1024 [MB] (15 MBps) [2024-11-27T19:26:22.679Z] Copying: 860/1024 [MB] (23 MBps) [2024-11-27T19:26:23.623Z] Copying: 880/1024 [MB] (19 MBps) [2024-11-27T19:26:24.566Z] Copying: 899/1024 [MB] (19 MBps) [2024-11-27T19:26:25.508Z] Copying: 917/1024 [MB] (17 MBps) [2024-11-27T19:26:26.891Z] Copying: 933/1024 [MB] (16 MBps) [2024-11-27T19:26:27.463Z] Copying: 952/1024 [MB] (18 MBps) [2024-11-27T19:26:28.849Z] Copying: 977/1024 [MB] (25 MBps) [2024-11-27T19:26:29.792Z] Copying: 1010/1024 [MB] (32 MBps) [2024-11-27T19:26:30.053Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-27T19:26:30.053Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 19:26:30.026799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.418 [2024-11-27 19:26:30.027161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:20.418 [2024-11-27 19:26:30.027195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:20.418 [2024-11-27 19:26:30.027205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.418 [2024-11-27 19:26:30.030055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:20.418 [2024-11-27 19:26:30.034202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.418 [2024-11-27 19:26:30.034268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:20.418 [2024-11-27 19:26:30.034286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:28:20.418 [2024-11-27 19:26:30.034296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.418 [2024-11-27 19:26:30.047449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.419 [2024-11-27 19:26:30.047504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:20.419 [2024-11-27 19:26:30.047518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.883 ms 00:28:20.419 [2024-11-27 19:26:30.047528] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.679 [2024-11-27 19:26:30.072378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.679 [2024-11-27 19:26:30.072434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:20.679 [2024-11-27 19:26:30.072448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.828 ms 00:28:20.679 [2024-11-27 19:26:30.072456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.679 [2024-11-27 19:26:30.078723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.679 [2024-11-27 19:26:30.078770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:20.679 [2024-11-27 19:26:30.078783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms 00:28:20.679 [2024-11-27 19:26:30.078791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.679 [2024-11-27 19:26:30.106022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.679 [2024-11-27 19:26:30.106266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:20.679 [2024-11-27 19:26:30.106291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.180 ms 00:28:20.679 [2024-11-27 19:26:30.106300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.679 [2024-11-27 19:26:30.123106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.679 [2024-11-27 19:26:30.123179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:20.679 [2024-11-27 19:26:30.123194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.762 ms 00:28:20.679 [2024-11-27 19:26:30.123204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.679 [2024-11-27 19:26:30.309292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.679 [2024-11-27 19:26:30.309497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:20.679 [2024-11-27 19:26:30.309520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 186.030 ms 00:28:20.679 [2024-11-27 19:26:30.309528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.941 [2024-11-27 19:26:30.336006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.941 [2024-11-27 19:26:30.336070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:20.941 [2024-11-27 19:26:30.336084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:28:20.941 [2024-11-27 19:26:30.336105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.941 [2024-11-27 19:26:30.362292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.941 [2024-11-27 19:26:30.362341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:20.941 [2024-11-27 19:26:30.362354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.973 ms 00:28:20.941 [2024-11-27 19:26:30.362361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.941 [2024-11-27 19:26:30.387674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.941 [2024-11-27 19:26:30.387865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:20.941 [2024-11-27 19:26:30.387886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.262 ms 00:28:20.941 [2024-11-27 19:26:30.387893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.941 [2024-11-27 19:26:30.413362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.941 [2024-11-27 19:26:30.413412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:20.941 [2024-11-27 19:26:30.413425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.345 ms 00:28:20.941 [2024-11-27 19:26:30.413432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.941 [2024-11-27 19:26:30.413480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:20.941 [2024-11-27 19:26:30.413496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98048 / 261120 wr_cnt: 1 state: open 00:28:20.941 [2024-11-27 19:26:30.413507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 
[2024-11-27 19:26:30.413671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:20.941 [2024-11-27 19:26:30.413781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:28:20.942 [2024-11-27 19:26:30.413874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.413994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:20.942 [2024-11-27 19:26:30.414372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:20.942 [2024-11-27 19:26:30.414385] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ad8a3954-0671-4961-b6b8-ebecf9396cce 00:28:20.942 [2024-11-27 19:26:30.414401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98048 00:28:20.942 [2024-11-27 19:26:30.414409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99008 00:28:20.942 [2024-11-27 19:26:30.414417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98048 00:28:20.942 [2024-11-27 19:26:30.414427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:28:20.942 [2024-11-27 19:26:30.414435] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:20.942 [2024-11-27 19:26:30.414444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:20.942 [2024-11-27 19:26:30.414453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:20.942 [2024-11-27 19:26:30.414473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:20.942 [2024-11-27 19:26:30.414480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:20.942 [2024-11-27 19:26:30.414488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.942 [2024-11-27 19:26:30.414495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:20.942 [2024-11-27 19:26:30.414504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:28:20.942 [2024-11-27 19:26:30.414512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.942 [2024-11-27 19:26:30.428053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.942 [2024-11-27 19:26:30.428099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:20.943 [2024-11-27 19:26:30.428112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.517 ms 00:28:20.943 [2024-11-27 19:26:30.428121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.428542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.943 [2024-11-27 19:26:30.428553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:20.943 [2024-11-27 19:26:30.428572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:28:20.943 [2024-11-27 19:26:30.428580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.465396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.943 [2024-11-27 19:26:30.465591] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:20.943 [2024-11-27 19:26:30.465613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.943 [2024-11-27 19:26:30.465622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.465693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.943 [2024-11-27 19:26:30.465704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:20.943 [2024-11-27 19:26:30.465721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.943 [2024-11-27 19:26:30.465730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.465823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.943 [2024-11-27 19:26:30.465835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:20.943 [2024-11-27 19:26:30.465844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.943 [2024-11-27 19:26:30.465852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.465868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.943 [2024-11-27 19:26:30.465876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:20.943 [2024-11-27 19:26:30.465884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.943 [2024-11-27 19:26:30.465895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.943 [2024-11-27 19:26:30.551170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.943 [2024-11-27 19:26:30.551229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.943 [2024-11-27 19:26:30.551244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.943 [2024-11-27 19:26:30.551252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.622538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.204 [2024-11-27 19:26:30.622560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.622569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.622644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.204 [2024-11-27 19:26:30.622653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.622661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.622734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.204 [2024-11-27 19:26:30.622743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.622751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622856] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.622868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.204 [2024-11-27 19:26:30.622876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.622884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.622927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:21.204 [2024-11-27 19:26:30.622935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.622943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.622987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.623013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.204 [2024-11-27 19:26:30.623022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.623030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.623076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.204 [2024-11-27 19:26:30.623087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.204 [2024-11-27 19:26:30.623096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.204 [2024-11-27 19:26:30.623104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.204 [2024-11-27 19:26:30.623273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.743 ms, result 0 00:28:22.590 00:28:22.590 00:28:22.590 19:26:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:25.135 19:26:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:25.135 [2024-11-27 19:26:34.301895] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:28:25.135 [2024-11-27 19:26:34.301988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81725 ] 00:28:25.135 [2024-11-27 19:26:34.457293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.135 [2024-11-27 19:26:34.571784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.397 [2024-11-27 19:26:34.870651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.397 [2024-11-27 19:26:34.871052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.397 [2024-11-27 19:26:35.030664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-27 19:26:35.030731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:25.397 [2024-11-27 19:26:35.030747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:25.397 [2024-11-27 19:26:35.030756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-27 19:26:35.030810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-27 19:26:35.030824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:25.397 [2024-11-27 19:26:35.030834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:25.397 [2024-11-27 19:26:35.030842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-27 19:26:35.030864] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:25.659 [2024-11-27 19:26:35.031661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:25.659 [2024-11-27 19:26:35.031689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.031697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:25.659 [2024-11-27 19:26:35.031706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:28:25.659 [2024-11-27 19:26:35.031714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.033427] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:25.659 [2024-11-27 19:26:35.047762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.047813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:25.659 [2024-11-27 19:26:35.047827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.337 ms 00:28:25.659 [2024-11-27 19:26:35.047836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.047925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.047935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:25.659 [2024-11-27 19:26:35.047945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:25.659 [2024-11-27 19:26:35.047953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.056722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:25.659 [2024-11-27 19:26:35.056770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:25.659 [2024-11-27 19:26:35.056782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.686 ms 00:28:25.659 [2024-11-27 19:26:35.056797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.056880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.056889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:25.659 [2024-11-27 19:26:35.056898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:25.659 [2024-11-27 19:26:35.056906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.056951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.056961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:25.659 [2024-11-27 19:26:35.056970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:25.659 [2024-11-27 19:26:35.056978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.057004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:25.659 [2024-11-27 19:26:35.061116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.061179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:25.659 [2024-11-27 19:26:35.061194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:28:25.659 [2024-11-27 19:26:35.061201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.061239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.061248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:25.659 [2024-11-27 19:26:35.061257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:25.659 [2024-11-27 19:26:35.061264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.061320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:25.659 [2024-11-27 19:26:35.061344] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:25.659 [2024-11-27 19:26:35.061384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:25.659 [2024-11-27 19:26:35.061403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:25.659 [2024-11-27 19:26:35.061509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:25.659 [2024-11-27 19:26:35.061520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:25.659 [2024-11-27 19:26:35.061532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:25.659 [2024-11-27 19:26:35.061542] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:25.659 [2024-11-27 19:26:35.061552] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:25.659 [2024-11-27 19:26:35.061560] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:25.659 [2024-11-27 19:26:35.061568] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:25.659 [2024-11-27 19:26:35.061578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:25.659 [2024-11-27 19:26:35.061586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:25.659 [2024-11-27 19:26:35.061594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.061602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:25.659 [2024-11-27 19:26:35.061610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:28:25.659 [2024-11-27 19:26:35.061618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.061702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.659 [2024-11-27 19:26:35.061711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:25.659 [2024-11-27 19:26:35.061719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:25.659 [2024-11-27 19:26:35.061726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.659 [2024-11-27 19:26:35.061846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:25.659 [2024-11-27 19:26:35.061857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:25.659 [2024-11-27 19:26:35.061865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.659 [2024-11-27 19:26:35.061873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.659 [2024-11-27 19:26:35.061881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:25.659 [2024-11-27 19:26:35.061888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:25.659 [2024-11-27 19:26:35.061897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:25.659 [2024-11-27 19:26:35.061905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:25.659 [2024-11-27 19:26:35.061912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:25.659 [2024-11-27 19:26:35.061919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.659 [2024-11-27 19:26:35.061926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:25.659 [2024-11-27 19:26:35.061936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:25.659 [2024-11-27 19:26:35.061943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.659 [2024-11-27 19:26:35.061957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:25.659 [2024-11-27 19:26:35.061965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:25.659 [2024-11-27 19:26:35.061972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.659 [2024-11-27 19:26:35.061979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:25.659 [2024-11-27 19:26:35.061986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:25.659 [2024-11-27 19:26:35.061994] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.659 [2024-11-27 19:26:35.062001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:25.659 [2024-11-27 19:26:35.062008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:25.659 [2024-11-27 19:26:35.062015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.659 [2024-11-27 19:26:35.062022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:25.659 [2024-11-27 19:26:35.062028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:25.659 [2024-11-27 19:26:35.062035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.659 [2024-11-27 19:26:35.062042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:25.659 [2024-11-27 19:26:35.062050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:25.659 [2024-11-27 19:26:35.062056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.659 [2024-11-27 19:26:35.062063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:25.659 [2024-11-27 19:26:35.062070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:25.660 [2024-11-27 19:26:35.062076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.660 [2024-11-27 19:26:35.062082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:25.660 [2024-11-27 19:26:35.062089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:25.660 [2024-11-27 19:26:35.062096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.660 [2024-11-27 19:26:35.062103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:25.660 [2024-11-27 19:26:35.062109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:25.660 [2024-11-27 19:26:35.062115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.660 [2024-11-27 19:26:35.062148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:25.660 [2024-11-27 19:26:35.062156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:25.660 [2024-11-27 19:26:35.062162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.660 [2024-11-27 19:26:35.062169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:25.660 [2024-11-27 19:26:35.062177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:25.660 [2024-11-27 19:26:35.062185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.660 [2024-11-27 19:26:35.062193] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:25.660 [2024-11-27 19:26:35.062202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:25.660 [2024-11-27 19:26:35.062210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.660 [2024-11-27 19:26:35.062219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.660 [2024-11-27 19:26:35.062227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:25.660 [2024-11-27 19:26:35.062234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:25.660 [2024-11-27 19:26:35.062241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:25.660 
[2024-11-27 19:26:35.062249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:25.660 [2024-11-27 19:26:35.062255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:25.660 [2024-11-27 19:26:35.062262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:25.660 [2024-11-27 19:26:35.062271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:25.660 [2024-11-27 19:26:35.062281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:25.660 [2024-11-27 19:26:35.062299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:25.660 [2024-11-27 19:26:35.062307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:25.660 [2024-11-27 19:26:35.062314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:25.660 [2024-11-27 19:26:35.062321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:25.660 [2024-11-27 19:26:35.062329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:25.660 [2024-11-27 19:26:35.062337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:25.660 [2024-11-27 19:26:35.062344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:25.660 [2024-11-27 19:26:35.062351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:25.660 [2024-11-27 19:26:35.062358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:25.660 [2024-11-27 19:26:35.062395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:25.660 [2024-11-27 19:26:35.062403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:25.660 [2024-11-27 19:26:35.062419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:25.660 [2024-11-27 19:26:35.062427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:25.660 [2024-11-27 19:26:35.062435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:25.660 [2024-11-27 19:26:35.062448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.062464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:25.660 [2024-11-27 19:26:35.062473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:28:25.660 [2024-11-27 19:26:35.062480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.095588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.095806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:25.660 [2024-11-27 19:26:35.095827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.059 ms 00:28:25.660 [2024-11-27 19:26:35.095845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.095942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.095952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:25.660 [2024-11-27 19:26:35.095961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:25.660 [2024-11-27 19:26:35.095969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.145538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.145596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:25.660 [2024-11-27 19:26:35.145609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.502 ms 00:28:25.660 [2024-11-27 19:26:35.145619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.145670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.145680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:25.660 [2024-11-27 19:26:35.145693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:25.660 [2024-11-27 19:26:35.145701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.146350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.146395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:25.660 [2024-11-27 19:26:35.146407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:28:25.660 [2024-11-27 19:26:35.146416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.146580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.146592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:25.660 [2024-11-27 19:26:35.146608] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:25.660 [2024-11-27 19:26:35.146616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.162564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.162612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:25.660 [2024-11-27 19:26:35.162624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.927 ms 00:28:25.660 [2024-11-27 19:26:35.162638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.177345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:25.660 [2024-11-27 19:26:35.177399] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:25.660 [2024-11-27 19:26:35.177413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.177422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:25.660 [2024-11-27 19:26:35.177431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.660 ms 00:28:25.660 [2024-11-27 19:26:35.177440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.203701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.203754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:25.660 [2024-11-27 19:26:35.203767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.199 ms 00:28:25.660 [2024-11-27 19:26:35.203775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.217154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.217204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:25.660 [2024-11-27 19:26:35.217217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.313 ms 00:28:25.660 [2024-11-27 19:26:35.217224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.230081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.230148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:25.660 [2024-11-27 19:26:35.230161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.806 ms 00:28:25.660 [2024-11-27 19:26:35.230169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.660 [2024-11-27 19:26:35.230839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.660 [2024-11-27 19:26:35.230875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.660 [2024-11-27 19:26:35.230890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:28:25.660 [2024-11-27 19:26:35.230898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.299992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.300061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:25.922 [2024-11-27 19:26:35.300085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.074 ms 00:28:25.922 [2024-11-27 19:26:35.300094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.311540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:25.922 [2024-11-27 19:26:35.314838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.315060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:25.922 [2024-11-27 19:26:35.315082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.654 ms 00:28:25.922 [2024-11-27 19:26:35.315092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.315210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.315223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:25.922 [2024-11-27 19:26:35.315237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:25.922 [2024-11-27 19:26:35.315245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.316960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.317013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:25.922 [2024-11-27 19:26:35.317025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:28:25.922 [2024-11-27 19:26:35.317033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.317066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.317075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.922 [2024-11-27 19:26:35.317085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.922 [2024-11-27 19:26:35.317092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.317156] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:25.922 [2024-11-27 19:26:35.317169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.317178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:25.922 [2024-11-27 19:26:35.317187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:25.922 [2024-11-27 19:26:35.317195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.343489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.343543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.922 [2024-11-27 19:26:35.343563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.275 ms 00:28:25.922 [2024-11-27 19:26:35.343573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.922 [2024-11-27 19:26:35.343668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.922 [2024-11-27 19:26:35.343679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.922 [2024-11-27 19:26:35.343689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:25.922 [2024-11-27 19:26:35.343698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:25.922 [2024-11-27 19:26:35.344992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.835 ms, result 0 00:28:27.310  [2024-11-27T19:26:37.888Z] Copying: 1068/1048576 [kB] (1068 kBps) [2024-11-27T19:27:15.868Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-27 19:27:15.766980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.767482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:06.233 [2024-11-27 19:27:15.767860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:06.233 [2024-11-27 19:27:15.767935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.768041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:06.233 [2024-11-27 19:27:15.773205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.773373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:06.233 [2024-11-27 19:27:15.773449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.872 ms
00:29:06.233 [2024-11-27 19:27:15.773475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.773904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.774159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:06.233 [2024-11-27 19:27:15.774186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:29:06.233 [2024-11-27 19:27:15.774196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.789075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.789158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:06.233 [2024-11-27 19:27:15.789172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.850 ms 00:29:06.233 [2024-11-27 19:27:15.789181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.795507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.795549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:06.233 [2024-11-27 19:27:15.795569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.281 ms 00:29:06.233 [2024-11-27 19:27:15.795577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.822682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.822726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:06.233 [2024-11-27 19:27:15.822739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.060 ms 00:29:06.233 [2024-11-27 19:27:15.822747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.839788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.839837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:06.233 [2024-11-27 19:27:15.839851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.991 ms 00:29:06.233 [2024-11-27 19:27:15.839860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.233 [2024-11-27 19:27:15.844644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.233 [2024-11-27 19:27:15.844693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:06.233 [2024-11-27 19:27:15.844705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:29:06.233 [2024-11-27 19:27:15.844721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.495 [2024-11-27 19:27:15.870865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.495 [2024-11-27 19:27:15.871059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:06.495 [2024-11-27 19:27:15.871080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.127 ms 00:29:06.495 [2024-11-27 19:27:15.871088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.495 [2024-11-27 19:27:15.897173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.495 [2024-11-27 19:27:15.897360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:06.495 [2024-11-27 19:27:15.897383] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:29:06.495 [2024-11-27 19:27:15.897391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.495 [2024-11-27 19:27:15.922335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.495 [2024-11-27 19:27:15.922382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:06.495 [2024-11-27 19:27:15.922395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.902 ms 00:29:06.495 [2024-11-27 19:27:15.922402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.495 [2024-11-27 19:27:15.947072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.495 [2024-11-27 19:27:15.947140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:06.495 [2024-11-27 19:27:15.947153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.593 ms 00:29:06.495 [2024-11-27 19:27:15.947160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.495 [2024-11-27 19:27:15.947205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:06.495 [2024-11-27 19:27:15.947222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:06.495 [2024-11-27 19:27:15.947233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:06.495 [2024-11-27 19:27:15.947242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947352] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 
19:27:15.947557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:06.495 [2024-11-27 19:27:15.947716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:29:06.496 [2024-11-27 19:27:15.947754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.947997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.948005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.948013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:06.496 [2024-11-27 19:27:15.948030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:06.496 [2024-11-27 19:27:15.948038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ad8a3954-0671-4961-b6b8-ebecf9396cce 00:29:06.496 [2024-11-27 19:27:15.948046] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:06.496 [2024-11-27 19:27:15.948054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 166592 00:29:06.496 [2024-11-27 19:27:15.948067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 164608 00:29:06.496 [2024-11-27 19:27:15.948076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:29:06.496 [2024-11-27 19:27:15.948084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:06.496 [2024-11-27 19:27:15.948099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:06.496 [2024-11-27 19:27:15.948107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:06.496 [2024-11-27 19:27:15.948114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:06.496 [2024-11-27 19:27:15.948120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:06.496 [2024-11-27 19:27:15.948140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.496 [2024-11-27 19:27:15.948148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:06.496 [2024-11-27 19:27:15.948157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:29:06.496 [2024-11-27 19:27:15.948165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.961473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.496 [2024-11-27 19:27:15.961517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:06.496 [2024-11-27 19:27:15.961529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.288 ms 00:29:06.496 [2024-11-27 19:27:15.961537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.961942] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.496 [2024-11-27 19:27:15.961952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:06.496 [2024-11-27 19:27:15.961961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:29:06.496 [2024-11-27 19:27:15.961969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.998336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.496 [2024-11-27 19:27:15.998382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:06.496 [2024-11-27 19:27:15.998395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.496 [2024-11-27 19:27:15.998405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.998465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.496 [2024-11-27 19:27:15.998473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:06.496 [2024-11-27 19:27:15.998482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.496 [2024-11-27 19:27:15.998490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.998588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.496 [2024-11-27 19:27:15.998599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:06.496 [2024-11-27 19:27:15.998608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.496 [2024-11-27 19:27:15.998615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:15.998632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.496 [2024-11-27 19:27:15.998640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:06.496 [2024-11-27 19:27:15.998648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.496 [2024-11-27 19:27:15.998656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.496 [2024-11-27 19:27:16.082004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.496 [2024-11-27 19:27:16.082061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:06.496 [2024-11-27 19:27:16.082075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.496 [2024-11-27 19:27:16.082084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:06.757 [2024-11-27 19:27:16.150256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.757 [2024-11-27 19:27:16.150349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.757 [2024-11-27 19:27:16.150434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.757 [2024-11-27 19:27:16.150562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:06.757 [2024-11-27 19:27:16.150621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.757 [2024-11-27 19:27:16.150694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.757 [2024-11-27 19:27:16.150763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.757 [2024-11-27 19:27:16.150771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.757 [2024-11-27 19:27:16.150779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.757 [2024-11-27 19:27:16.150917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.946 ms, result 0 00:29:07.329 00:29:07.329 00:29:07.329 19:27:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:09.873 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:09.873 19:27:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:09.873 [2024-11-27 19:27:18.979652] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:29:09.874 [2024-11-27 19:27:18.979772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82182 ] 00:29:09.874 [2024-11-27 19:27:19.141925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.874 [2024-11-27 19:27:19.255971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.135 [2024-11-27 19:27:19.551310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:10.135 [2024-11-27 19:27:19.551383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:10.135 [2024-11-27 19:27:19.710581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.710796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:10.135 [2024-11-27 19:27:19.710821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:10.135 [2024-11-27 19:27:19.710831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.710899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.710913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:10.135 [2024-11-27 19:27:19.710922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:10.135 [2024-11-27 19:27:19.710931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.710954] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:10.135 [2024-11-27 19:27:19.711691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:10.135 [2024-11-27 19:27:19.711718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.711727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:10.135 [2024-11-27 19:27:19.711736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:29:10.135 [2024-11-27 19:27:19.711744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.713542] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:10.135 [2024-11-27 19:27:19.727407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.727451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:10.135 [2024-11-27 19:27:19.727463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.869 ms 00:29:10.135 [2024-11-27 19:27:19.727472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.727548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.727559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:10.135 [2024-11-27 19:27:19.727568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:10.135 [2024-11-27 19:27:19.727575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.735355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:10.135 [2024-11-27 19:27:19.735397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:10.135 [2024-11-27 19:27:19.735407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.704 ms 00:29:10.135 [2024-11-27 19:27:19.735422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.735500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.735510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:10.135 [2024-11-27 19:27:19.735519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:10.135 [2024-11-27 19:27:19.735526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.735568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.735577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:10.135 [2024-11-27 19:27:19.735586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:10.135 [2024-11-27 19:27:19.735593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.735619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:10.135 [2024-11-27 19:27:19.739458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.739493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:10.135 [2024-11-27 19:27:19.739506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:29:10.135 [2024-11-27 19:27:19.739514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.739547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.135 [2024-11-27 19:27:19.739555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:10.135 [2024-11-27 19:27:19.739564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:10.135 [2024-11-27 19:27:19.739572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.135 [2024-11-27 19:27:19.739621] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:10.135 [2024-11-27 19:27:19.739643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:10.135 [2024-11-27 19:27:19.739685] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:10.135 [2024-11-27 19:27:19.739705] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:10.135 [2024-11-27 19:27:19.739811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:10.135 [2024-11-27 19:27:19.739822] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:10.135 [2024-11-27 19:27:19.739833] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:10.135 [2024-11-27 19:27:19.739844] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:10.136 [2024-11-27 19:27:19.739854] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:10.136 [2024-11-27 19:27:19.739862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:10.136 [2024-11-27 19:27:19.739871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:10.136 [2024-11-27 19:27:19.739882] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:10.136 [2024-11-27 19:27:19.739890] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:10.136 [2024-11-27 19:27:19.739898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.136 [2024-11-27 19:27:19.739907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:10.136 [2024-11-27 19:27:19.739915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:29:10.136 [2024-11-27 19:27:19.739922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.136 [2024-11-27 19:27:19.740005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.136 [2024-11-27 19:27:19.740014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:10.136 [2024-11-27 19:27:19.740022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:10.136 [2024-11-27 19:27:19.740029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.136 [2024-11-27 19:27:19.740149] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:10.136 [2024-11-27 19:27:19.740161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:10.136 [2024-11-27 19:27:19.740170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:10.136 [2024-11-27 19:27:19.740194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:10.136 [2024-11-27 19:27:19.740216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:10.136 [2024-11-27 19:27:19.740231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:10.136 [2024-11-27 19:27:19.740239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:10.136 [2024-11-27 19:27:19.740247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:10.136 [2024-11-27 19:27:19.740260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:10.136 [2024-11-27 19:27:19.740267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:10.136 [2024-11-27 19:27:19.740277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:10.136 [2024-11-27 19:27:19.740291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740298] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:10.136 [2024-11-27 19:27:19.740312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:10.136 [2024-11-27 19:27:19.740331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:10.136 [2024-11-27 19:27:19.740351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:10.136 [2024-11-27 19:27:19.740371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:10.136 [2024-11-27 19:27:19.740392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:10.136 [2024-11-27 19:27:19.740405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:10.136 [2024-11-27 19:27:19.740412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:10.136 [2024-11-27 19:27:19.740418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:10.136 [2024-11-27 19:27:19.740424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:10.136 [2024-11-27 19:27:19.740432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:10.136 [2024-11-27 19:27:19.740438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:10.136 [2024-11-27 19:27:19.740451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:10.136 [2024-11-27 19:27:19.740457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:10.136 [2024-11-27 19:27:19.740472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:10.136 [2024-11-27 19:27:19.740480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:10.136 [2024-11-27 19:27:19.740497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:10.136 [2024-11-27 19:27:19.740504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:10.136 [2024-11-27 19:27:19.740512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:10.136 
[2024-11-27 19:27:19.740519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:10.136 [2024-11-27 19:27:19.740525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:10.136 [2024-11-27 19:27:19.740532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:10.136 [2024-11-27 19:27:19.740541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:10.136 [2024-11-27 19:27:19.740550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:10.136 [2024-11-27 19:27:19.740568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:10.136 [2024-11-27 19:27:19.740576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:10.136 [2024-11-27 19:27:19.740583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:10.136 [2024-11-27 19:27:19.740590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:10.136 [2024-11-27 19:27:19.740598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:10.136 [2024-11-27 19:27:19.740606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:10.136 [2024-11-27 19:27:19.740615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:10.136 [2024-11-27 19:27:19.740622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:10.136 [2024-11-27 19:27:19.740629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:10.136 [2024-11-27 19:27:19.740665] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:10.136 [2024-11-27 19:27:19.740673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:10.136 [2024-11-27 19:27:19.740688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:10.136 [2024-11-27 19:27:19.740695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:10.136 [2024-11-27 19:27:19.740703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:10.136 [2024-11-27 19:27:19.740710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.136 [2024-11-27 19:27:19.740718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:10.136 [2024-11-27 19:27:19.740726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:29:10.136 [2024-11-27 19:27:19.740733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.772255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.772302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:10.398 [2024-11-27 19:27:19.772315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.476 ms 00:29:10.398 [2024-11-27 19:27:19.772327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.772412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.772420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:10.398 [2024-11-27 19:27:19.772429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:10.398 [2024-11-27 19:27:19.772437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.817145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.817195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:10.398 [2024-11-27 19:27:19.817209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.649 ms 00:29:10.398 [2024-11-27 19:27:19.817217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.817266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.817277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:10.398 [2024-11-27 19:27:19.817290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:10.398 [2024-11-27 19:27:19.817298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.817848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.817870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:10.398 [2024-11-27 19:27:19.817881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:29:10.398 [2024-11-27 19:27:19.817889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.818042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.818052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:10.398 [2024-11-27 19:27:19.818067] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:29:10.398 [2024-11-27 19:27:19.818075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.833611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.833656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:10.398 [2024-11-27 19:27:19.833668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.515 ms 00:29:10.398 [2024-11-27 19:27:19.833675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.847726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:10.398 [2024-11-27 19:27:19.847904] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:10.398 [2024-11-27 19:27:19.847924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.847933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:10.398 [2024-11-27 19:27:19.847943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.140 ms 00:29:10.398 [2024-11-27 19:27:19.847950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.875091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.875270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:10.398 [2024-11-27 19:27:19.875292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.095 ms 00:29:10.398 [2024-11-27 19:27:19.875302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.888469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.888514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:10.398 [2024-11-27 19:27:19.888526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.023 ms 00:29:10.398 [2024-11-27 19:27:19.888533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.900991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.901035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:10.398 [2024-11-27 19:27:19.901047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:29:10.398 [2024-11-27 19:27:19.901054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.901710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.901743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:10.398 [2024-11-27 19:27:19.901758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:29:10.398 [2024-11-27 19:27:19.901766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.968659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.968718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:10.398 [2024-11-27 19:27:19.968740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.874 ms 00:29:10.398 [2024-11-27 19:27:19.968748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.979712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:10.398 [2024-11-27 19:27:19.982779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.982821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:10.398 [2024-11-27 19:27:19.982832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:29:10.398 [2024-11-27 19:27:19.982840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.982927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.982938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:10.398 [2024-11-27 19:27:19.982951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:10.398 [2024-11-27 19:27:19.982959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.398 [2024-11-27 19:27:19.983858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.398 [2024-11-27 19:27:19.983892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:10.398 [2024-11-27 19:27:19.983903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:29:10.398 [2024-11-27 19:27:19.983912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.399 [2024-11-27 19:27:19.983940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.399 [2024-11-27 19:27:19.983951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:10.399 [2024-11-27 19:27:19.983959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:10.399 [2024-11-27 19:27:19.983968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.399 [2024-11-27 19:27:19.984014] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:10.399 [2024-11-27 19:27:19.984025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.399 [2024-11-27 19:27:19.984034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:10.399 [2024-11-27 19:27:19.984043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:10.399 [2024-11-27 19:27:19.984051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.399 [2024-11-27 19:27:20.009583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.399 [2024-11-27 19:27:20.009631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:10.399 [2024-11-27 19:27:20.009650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.512 ms 00:29:10.399 [2024-11-27 19:27:20.009661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.399 [2024-11-27 19:27:20.009746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.399 [2024-11-27 19:27:20.009756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:10.399 [2024-11-27 19:27:20.009766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:10.399 [2024-11-27 19:27:20.009775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:10.399 [2024-11-27 19:27:20.010999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.943 ms, result 0 00:29:11.786  [2024-11-27T19:27:22.366Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-27T19:27:23.309Z] Copying: 39/1024 [MB] (21 MBps) [2024-11-27T19:27:24.254Z] Copying: 55/1024 [MB] (15 MBps) [2024-11-27T19:27:25.283Z] Copying: 71/1024 [MB] (16 MBps) [2024-11-27T19:27:26.230Z] Copying: 92/1024 [MB] (21 MBps) [2024-11-27T19:27:27.616Z] Copying: 113/1024 [MB] (20 MBps) [2024-11-27T19:27:28.559Z] Copying: 133/1024 [MB] (20 MBps) [2024-11-27T19:27:29.503Z] Copying: 155/1024 [MB] (21 MBps) [2024-11-27T19:27:30.445Z] Copying: 175/1024 [MB] (20 MBps) [2024-11-27T19:27:31.389Z] Copying: 197/1024 [MB] (21 MBps) [2024-11-27T19:27:32.334Z] Copying: 213/1024 [MB] (15 MBps) [2024-11-27T19:27:33.278Z] Copying: 229/1024 [MB] (16 MBps) [2024-11-27T19:27:34.223Z] Copying: 247/1024 [MB] (18 MBps) [2024-11-27T19:27:35.610Z] Copying: 263/1024 [MB] (15 MBps) [2024-11-27T19:27:36.554Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-27T19:27:37.514Z] Copying: 294/1024 [MB] (15 MBps) [2024-11-27T19:27:38.458Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-27T19:27:39.407Z] Copying: 325/1024 [MB] (21 MBps) [2024-11-27T19:27:40.350Z] Copying: 341/1024 [MB] (15 MBps) [2024-11-27T19:27:41.293Z] Copying: 358/1024 [MB] (16 MBps) [2024-11-27T19:27:42.238Z] Copying: 373/1024 [MB] (15 MBps) [2024-11-27T19:27:43.626Z] Copying: 389/1024 [MB] (15 MBps) [2024-11-27T19:27:44.200Z] Copying: 403/1024 [MB] (13 MBps) [2024-11-27T19:27:45.588Z] Copying: 423/1024 [MB] (20 MBps) [2024-11-27T19:27:46.533Z] Copying: 443/1024 [MB] (19 MBps) [2024-11-27T19:27:47.479Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-27T19:27:48.423Z] Copying: 469/1024 [MB] (11 MBps) [2024-11-27T19:27:49.371Z] Copying: 491/1024 [MB] (21 MBps) [2024-11-27T19:27:50.314Z] Copying: 514/1024 [MB] (22 MBps) [2024-11-27T19:27:51.260Z] Copying: 530/1024 [MB] (16 MBps) [2024-11-27T19:27:52.203Z] Copying: 547/1024 [MB] (17 MBps) [2024-11-27T19:27:53.590Z] Copying: 561/1024 [MB] (13 MBps) [2024-11-27T19:27:54.531Z] Copying: 579/1024 [MB] (18 MBps) [2024-11-27T19:27:55.477Z] Copying: 599/1024 [MB] (20 MBps) [2024-11-27T19:27:56.420Z] Copying: 612/1024 [MB] (12 MBps) [2024-11-27T19:27:57.421Z] Copying: 626/1024 [MB] (13 MBps) [2024-11-27T19:27:58.354Z] Copying: 639/1024 [MB] (13 MBps) [2024-11-27T19:27:59.284Z] Copying: 652/1024 [MB] (12 MBps) [2024-11-27T19:28:00.216Z] Copying: 668/1024 [MB] (16 MBps) [2024-11-27T19:28:01.599Z] Copying: 687/1024 [MB] (19 MBps) [2024-11-27T19:28:02.539Z] Copying: 708/1024 [MB] (20 MBps) [2024-11-27T19:28:03.483Z] Copying: 724/1024 [MB] (16 MBps) [2024-11-27T19:28:04.425Z] Copying: 739/1024 [MB] (14 MBps) [2024-11-27T19:28:05.369Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-27T19:28:06.312Z] Copying: 769/1024 [MB] (17 MBps) [2024-11-27T19:28:07.253Z] Copying: 787/1024 [MB] (18 MBps) [2024-11-27T19:28:08.195Z] Copying: 801/1024 [MB] (14 MBps) [2024-11-27T19:28:09.574Z] Copying: 811/1024 [MB] (10 MBps) [2024-11-27T19:28:10.515Z] Copying: 825/1024 [MB] (13 MBps) [2024-11-27T19:28:11.458Z] Copying: 840/1024 [MB] (15 MBps) [2024-11-27T19:28:12.401Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-27T19:28:13.343Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-27T19:28:14.283Z] Copying: 881/1024 [MB] (11 MBps) [2024-11-27T19:28:15.225Z] Copying: 892/1024 [MB] (10 MBps) [2024-11-27T19:28:16.602Z] Copying: 906/1024 [MB] (14 MBps) [2024-11-27T19:28:17.543Z] Copying: 920/1024 [MB] (14 MBps) 
[2024-11-27T19:28:18.483Z] Copying: 931/1024 [MB] (11 MBps) [2024-11-27T19:28:19.418Z] Copying: 944/1024 [MB] (12 MBps) [2024-11-27T19:28:20.353Z] Copying: 955/1024 [MB] (11 MBps) [2024-11-27T19:28:21.286Z] Copying: 970/1024 [MB] (14 MBps) [2024-11-27T19:28:22.219Z] Copying: 984/1024 [MB] (13 MBps) [2024-11-27T19:28:23.602Z] Copying: 998/1024 [MB] (13 MBps) [2024-11-27T19:28:24.539Z] Copying: 1009/1024 [MB] (11 MBps) [2024-11-27T19:28:24.539Z] Copying: 1022/1024 [MB] (12 MBps) [2024-11-27T19:28:24.539Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 19:28:24.371242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.904 [2024-11-27 19:28:24.371309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:14.904 [2024-11-27 19:28:24.371327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:14.904 [2024-11-27 19:28:24.371337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.904 [2024-11-27 19:28:24.371363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:14.904 [2024-11-27 19:28:24.375272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.904 [2024-11-27 19:28:24.375316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:14.904 [2024-11-27 19:28:24.375329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:30:14.904 [2024-11-27 19:28:24.375339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.904 [2024-11-27 19:28:24.375601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.904 [2024-11-27 19:28:24.375613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:14.904 [2024-11-27 19:28:24.375623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:30:14.904 [2024-11-27 19:28:24.375633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.904 [2024-11-27 19:28:24.379794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.904 [2024-11-27 19:28:24.379816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:14.904 [2024-11-27 19:28:24.379826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:30:14.904 [2024-11-27 19:28:24.379839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.385967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.385993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:14.905 [2024-11-27 19:28:24.386004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:30:14.905 [2024-11-27 19:28:24.386011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.406291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.406318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:14.905 [2024-11-27 19:28:24.406327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.226 ms 00:30:14.905 [2024-11-27 19:28:24.406334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.424835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.424886] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:14.905 [2024-11-27 19:28:24.424900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.469 ms 00:30:14.905 [2024-11-27 19:28:24.424908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.428951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.429081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:14.905 [2024-11-27 19:28:24.429098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.991 ms 00:30:14.905 [2024-11-27 19:28:24.429106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.453626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.453658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:14.905 [2024-11-27 19:28:24.453669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:30:14.905 [2024-11-27 19:28:24.453676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.477325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.477355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:14.905 [2024-11-27 19:28:24.477365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.616 ms 00:30:14.905 [2024-11-27 19:28:24.477372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.500431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.500463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:14.905 [2024-11-27 19:28:24.500474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.028 ms 00:30:14.905 [2024-11-27 19:28:24.500480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.523484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.905 [2024-11-27 19:28:24.523620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:14.905 [2024-11-27 19:28:24.523636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.927 ms 00:30:14.905 [2024-11-27 19:28:24.523644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.905 [2024-11-27 19:28:24.523674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:14.905 [2024-11-27 19:28:24.523693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:14.905 [2024-11-27 19:28:24.523706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:14.905 [2024-11-27 19:28:24.523715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 
[2024-11-27 19:28:24.523745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:30:14.905 [2024-11-27 19:28:24.523931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.523997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.524004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.524011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:14.905 [2024-11-27 19:28:24.524018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:14.906 [2024-11-27 19:28:24.524479] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:14.906 [2024-11-27 19:28:24.524486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ad8a3954-0671-4961-b6b8-ebecf9396cce 00:30:14.906 [2024-11-27 19:28:24.524494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:14.906 [2024-11-27 19:28:24.524501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:14.906 [2024-11-27 19:28:24.524508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:14.906 [2024-11-27 19:28:24.524516] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:14.906 [2024-11-27 19:28:24.524528] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:14.906 [2024-11-27 19:28:24.524536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:14.906 [2024-11-27 19:28:24.524543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:14.906 [2024-11-27 19:28:24.524549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:14.906 [2024-11-27 19:28:24.524556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:14.906 [2024-11-27 19:28:24.524563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.906 [2024-11-27 19:28:24.524570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:14.906 [2024-11-27 19:28:24.524578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:30:14.906 [2024-11-27 19:28:24.524587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.907 [2024-11-27 19:28:24.537052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.907 [2024-11-27 19:28:24.537085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:14.907 [2024-11-27 19:28:24.537096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:30:14.907 [2024-11-27 19:28:24.537103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:14.907 [2024-11-27 19:28:24.537485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:14.907 [2024-11-27 19:28:24.537501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:14.907 [2024-11-27 19:28:24.537510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:30:14.907 [2024-11-27 19:28:24.537517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.572084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.572120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:15.167 [2024-11-27 19:28:24.572260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.572268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.572326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.572339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.167 [2024-11-27 19:28:24.572347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.572355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.572412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.572422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.167 [2024-11-27 19:28:24.572430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.572438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.572453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.572461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.167 [2024-11-27 19:28:24.572472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:30:15.167 [2024-11-27 19:28:24.572479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.652232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.652289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.167 [2024-11-27 19:28:24.652302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.652311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.167 [2024-11-27 19:28:24.716302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.716309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:15.167 [2024-11-27 19:28:24.716390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.716398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:15.167 [2024-11-27 19:28:24.716451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.716462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:15.167 [2024-11-27 19:28:24.716565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.716572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:15.167 [2024-11-27 19:28:24.716617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.167 [2024-11-27 19:28:24.716624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.167 [2024-11-27 19:28:24.716663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.167 [2024-11-27 19:28:24.716672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:15.168 [2024-11-27 19:28:24.716679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.168 [2024-11-27 19:28:24.716687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.168 [2024-11-27 19:28:24.716729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.168 [2024-11-27 19:28:24.716738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:15.168 [2024-11-27 19:28:24.716746] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.168 [2024-11-27 19:28:24.716756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.168 [2024-11-27 19:28:24.716872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.606 ms, result 0 00:30:16.106 00:30:16.106 00:30:16.106 19:28:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:18.016 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:18.016 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:18.016 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:18.016 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:18.016 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:18.276 Process with pid 80336 is not found 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80336 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80336 ']' 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80336 00:30:18.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80336) - No such process 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80336 is not found' 00:30:18.276 19:28:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:18.551 Remove shared memory files 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:18.551 ************************************ 00:30:18.551 END TEST ftl_dirty_shutdown 00:30:18.551 ************************************ 00:30:18.551 00:30:18.551 real 4m2.681s 00:30:18.551 user 4m21.871s 00:30:18.551 sys 0m26.571s 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.551 19:28:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:18.853 19:28:28 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:18.853 19:28:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:18.853 19:28:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.853 
19:28:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:18.853 ************************************ 00:30:18.853 START TEST ftl_upgrade_shutdown 00:30:18.853 ************************************ 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:18.853 * Looking for test storage... 00:30:18.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.853 --rc genhtml_branch_coverage=1 00:30:18.853 --rc genhtml_function_coverage=1 00:30:18.853 --rc genhtml_legend=1 00:30:18.853 --rc geninfo_all_blocks=1 00:30:18.853 --rc geninfo_unexecuted_blocks=1 00:30:18.853 00:30:18.853 ' 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.853 --rc genhtml_branch_coverage=1 00:30:18.853 --rc genhtml_function_coverage=1 00:30:18.853 --rc genhtml_legend=1 00:30:18.853 --rc geninfo_all_blocks=1 00:30:18.853 --rc geninfo_unexecuted_blocks=1 00:30:18.853 00:30:18.853 ' 00:30:18.853 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.854 --rc genhtml_branch_coverage=1 00:30:18.854 --rc genhtml_function_coverage=1 00:30:18.854 --rc genhtml_legend=1 00:30:18.854 --rc geninfo_all_blocks=1 00:30:18.854 --rc geninfo_unexecuted_blocks=1 00:30:18.854 00:30:18.854 ' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.854 --rc genhtml_branch_coverage=1 00:30:18.854 --rc genhtml_function_coverage=1 00:30:18.854 --rc genhtml_legend=1 00:30:18.854 --rc geninfo_all_blocks=1 00:30:18.854 --rc geninfo_unexecuted_blocks=1 00:30:18.854 00:30:18.854 ' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:18.854 19:28:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82957 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82957 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82957 ']' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.854 19:28:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:19.122 [2024-11-27 19:28:28.499027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
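The tcp_target_setup sequence above exports the FTL test parameters and then launches spdk_tgt pinned to core 0, waiting for its RPC socket before issuing bdev commands. A minimal sketch of the same launch-and-wait pattern, assuming rootdir points at the SPDK checkout as set in ftl/common.sh above, and substituting a plain polling loop for the harness's waitforlisten helper:

    rootdir=/home/vagrant/spdk_repo/spdk            # as set in ftl/common.sh above
    "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
    until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the loop exits, the target is ready for the bdev_nvme_attach_controller and bdev_ftl_create calls that follow below.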
00:30:19.122 [2024-11-27 19:28:28.499455] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82957 ] 00:30:19.122 [2024-11-27 19:28:28.665360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.383 [2024-11-27 19:28:28.762957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:19.951 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:20.210 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:20.471 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:20.471 { 00:30:20.471 "name": "basen1", 00:30:20.471 "aliases": [ 00:30:20.471 "4efc0091-7b2a-481c-9dec-20200299281b" 00:30:20.471 ], 00:30:20.471 "product_name": "NVMe disk", 00:30:20.471 "block_size": 4096, 00:30:20.471 "num_blocks": 1310720, 00:30:20.471 "uuid": "4efc0091-7b2a-481c-9dec-20200299281b", 00:30:20.471 "numa_id": -1, 00:30:20.471 "assigned_rate_limits": { 00:30:20.471 "rw_ios_per_sec": 0, 00:30:20.471 "rw_mbytes_per_sec": 0, 00:30:20.471 "r_mbytes_per_sec": 0, 00:30:20.471 "w_mbytes_per_sec": 0 00:30:20.471 }, 00:30:20.471 "claimed": true, 00:30:20.471 "claim_type": "read_many_write_one", 00:30:20.471 "zoned": false, 00:30:20.471 "supported_io_types": { 00:30:20.471 "read": true, 00:30:20.471 "write": true, 00:30:20.471 "unmap": true, 00:30:20.471 "flush": true, 00:30:20.471 "reset": true, 00:30:20.471 "nvme_admin": true, 00:30:20.471 "nvme_io": true, 00:30:20.471 "nvme_io_md": false, 00:30:20.471 "write_zeroes": true, 00:30:20.471 "zcopy": false, 00:30:20.471 "get_zone_info": false, 00:30:20.471 "zone_management": false, 00:30:20.471 "zone_append": false, 00:30:20.471 "compare": true, 00:30:20.471 "compare_and_write": false, 00:30:20.471 "abort": true, 00:30:20.471 "seek_hole": false, 00:30:20.471 "seek_data": false, 00:30:20.471 "copy": true, 00:30:20.471 "nvme_iov_md": false 00:30:20.471 }, 00:30:20.471 "driver_specific": { 00:30:20.471 "nvme": [ 00:30:20.471 { 00:30:20.471 "pci_address": "0000:00:11.0", 00:30:20.471 "trid": { 00:30:20.471 "trtype": "PCIe", 00:30:20.471 "traddr": "0000:00:11.0" 00:30:20.472 }, 00:30:20.472 "ctrlr_data": { 00:30:20.472 "cntlid": 0, 00:30:20.472 "vendor_id": "0x1b36", 00:30:20.472 "model_number": "QEMU NVMe Ctrl", 00:30:20.472 "serial_number": "12341", 00:30:20.472 "firmware_revision": "8.0.0", 00:30:20.472 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:20.472 "oacs": { 00:30:20.472 "security": 0, 00:30:20.472 "format": 1, 00:30:20.472 "firmware": 0, 00:30:20.472 "ns_manage": 1 00:30:20.472 }, 00:30:20.472 "multi_ctrlr": false, 00:30:20.472 "ana_reporting": false 00:30:20.472 }, 00:30:20.472 "vs": { 00:30:20.472 "nvme_version": "1.4" 00:30:20.472 }, 00:30:20.472 "ns_data": { 00:30:20.472 "id": 1, 00:30:20.472 "can_share": false 00:30:20.472 } 00:30:20.472 } 00:30:20.472 ], 00:30:20.472 "mp_policy": "active_passive" 00:30:20.472 } 00:30:20.472 } 00:30:20.472 ]' 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:20.472 19:28:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:20.731 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e08b2492-637e-4844-8bbd-696cb073d8a0 00:30:20.731 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:20.731 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e08b2492-637e-4844-8bbd-696cb073d8a0 00:30:20.992 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7dd8a1b5-96d2-469c-b831-d1898f82e58f 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7dd8a1b5-96d2-469c-b831-d1898f82e58f 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 ]] 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 5120 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:21.254 19:28:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 00:30:21.515 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:21.515 { 00:30:21.515 "name": "d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1", 00:30:21.515 "aliases": [ 00:30:21.515 "lvs/basen1p0" 00:30:21.515 ], 00:30:21.515 "product_name": "Logical Volume", 00:30:21.515 "block_size": 4096, 00:30:21.515 "num_blocks": 5242880, 00:30:21.515 "uuid": "d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1", 00:30:21.515 "assigned_rate_limits": { 00:30:21.515 "rw_ios_per_sec": 0, 00:30:21.515 "rw_mbytes_per_sec": 0, 00:30:21.515 "r_mbytes_per_sec": 0, 00:30:21.515 "w_mbytes_per_sec": 0 00:30:21.515 }, 00:30:21.515 "claimed": false, 00:30:21.515 "zoned": false, 00:30:21.515 "supported_io_types": { 00:30:21.515 "read": true, 00:30:21.515 "write": true, 00:30:21.515 "unmap": true, 00:30:21.515 "flush": false, 00:30:21.515 "reset": true, 00:30:21.515 "nvme_admin": false, 00:30:21.515 "nvme_io": false, 00:30:21.515 "nvme_io_md": false, 00:30:21.515 "write_zeroes": 
true, 00:30:21.515 "zcopy": false, 00:30:21.515 "get_zone_info": false, 00:30:21.515 "zone_management": false, 00:30:21.515 "zone_append": false, 00:30:21.515 "compare": false, 00:30:21.515 "compare_and_write": false, 00:30:21.515 "abort": false, 00:30:21.515 "seek_hole": true, 00:30:21.515 "seek_data": true, 00:30:21.515 "copy": false, 00:30:21.515 "nvme_iov_md": false 00:30:21.515 }, 00:30:21.515 "driver_specific": { 00:30:21.515 "lvol": { 00:30:21.515 "lvol_store_uuid": "7dd8a1b5-96d2-469c-b831-d1898f82e58f", 00:30:21.515 "base_bdev": "basen1", 00:30:21.515 "thin_provision": true, 00:30:21.515 "num_allocated_clusters": 0, 00:30:21.515 "snapshot": false, 00:30:21.515 "clone": false, 00:30:21.515 "esnap_clone": false 00:30:21.515 } 00:30:21.515 } 00:30:21.515 } 00:30:21.515 ]' 00:30:21.515 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:21.515 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:21.515 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:21.776 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:22.037 19:28:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d0bc6a41-0c77-4b73-b7fa-f877a7bb33f1 -c cachen1p0 --l2p_dram_limit 2 00:30:22.300 [2024-11-27 19:28:31.843794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.844093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:22.300 [2024-11-27 19:28:31.844158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:22.300 [2024-11-27 19:28:31.844170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.844273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.844288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:22.300 [2024-11-27 19:28:31.844301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:30:22.300 [2024-11-27 19:28:31.844309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.844336] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:22.300 [2024-11-27 
19:28:31.845102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:22.300 [2024-11-27 19:28:31.845154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.845164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:22.300 [2024-11-27 19:28:31.845178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.820 ms 00:30:22.300 [2024-11-27 19:28:31.845186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.845230] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d3964679-f9fd-4b43-a6c9-6913abbbe4be 00:30:22.300 [2024-11-27 19:28:31.847638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.847696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:22.300 [2024-11-27 19:28:31.847711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:30:22.300 [2024-11-27 19:28:31.847724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.860812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.860872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:22.300 [2024-11-27 19:28:31.860884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.969 ms 00:30:22.300 [2024-11-27 19:28:31.860895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.860951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.860964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:22.300 [2024-11-27 19:28:31.860973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:22.300 [2024-11-27 19:28:31.860987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.861049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.861063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:22.300 [2024-11-27 19:28:31.861077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:22.300 [2024-11-27 19:28:31.861091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.861115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:22.300 [2024-11-27 19:28:31.866232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.866467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:22.300 [2024-11-27 19:28:31.866500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.120 ms 00:30:22.300 [2024-11-27 19:28:31.866510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.866551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.866561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:22.300 [2024-11-27 19:28:31.866574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:22.300 [2024-11-27 19:28:31.866582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.866626] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:22.300 [2024-11-27 19:28:31.866783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:22.300 [2024-11-27 19:28:31.866804] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:22.300 [2024-11-27 19:28:31.866817] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:22.300 [2024-11-27 19:28:31.866832] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:22.300 [2024-11-27 19:28:31.866841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:22.300 [2024-11-27 19:28:31.866852] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:22.300 [2024-11-27 19:28:31.866864] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:22.300 [2024-11-27 19:28:31.866877] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:22.300 [2024-11-27 19:28:31.866885] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:22.300 [2024-11-27 19:28:31.866897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.866906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:22.300 [2024-11-27 19:28:31.866919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:30:22.300 [2024-11-27 19:28:31.866927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.867014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.300 [2024-11-27 19:28:31.867036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:22.300 [2024-11-27 19:28:31.867046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:22.300 [2024-11-27 19:28:31.867056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.300 [2024-11-27 19:28:31.867224] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:22.300 [2024-11-27 19:28:31.867240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:22.300 [2024-11-27 19:28:31.867252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:22.300 [2024-11-27 19:28:31.867280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:22.300 [2024-11-27 19:28:31.867302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:22.300 [2024-11-27 19:28:31.867312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:22.300 [2024-11-27 19:28:31.867319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:22.300 [2024-11-27 19:28:31.867338] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:22.300 [2024-11-27 19:28:31.867351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:22.300 [2024-11-27 19:28:31.867368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:22.300 [2024-11-27 19:28:31.867375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:22.300 [2024-11-27 19:28:31.867395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:22.300 [2024-11-27 19:28:31.867405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:22.300 [2024-11-27 19:28:31.867422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:22.300 [2024-11-27 19:28:31.867432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:22.300 [2024-11-27 19:28:31.867450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:22.300 [2024-11-27 19:28:31.867459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:22.300 [2024-11-27 19:28:31.867480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:22.300 [2024-11-27 19:28:31.867486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:22.300 [2024-11-27 19:28:31.867503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:22.300 [2024-11-27 19:28:31.867512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:22.300 [2024-11-27 19:28:31.867530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:22.300 [2024-11-27 19:28:31.867538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:22.300 [2024-11-27 19:28:31.867555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:22.300 [2024-11-27 19:28:31.867564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:22.300 [2024-11-27 19:28:31.867579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:22.300 [2024-11-27 19:28:31.867605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:22.300 [2024-11-27 19:28:31.867614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.300 [2024-11-27 19:28:31.867620] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:22.300 [2024-11-27 19:28:31.867631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:22.301 [2024-11-27 19:28:31.867638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:22.301 [2024-11-27 19:28:31.867651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.301 [2024-11-27 19:28:31.867659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:22.301 [2024-11-27 19:28:31.867671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:22.301 [2024-11-27 19:28:31.867679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:22.301 [2024-11-27 19:28:31.867687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:22.301 [2024-11-27 19:28:31.867695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:22.301 [2024-11-27 19:28:31.867705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:22.301 [2024-11-27 19:28:31.867718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:22.301 [2024-11-27 19:28:31.867735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:22.301 [2024-11-27 19:28:31.867754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:22.301 [2024-11-27 19:28:31.867781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:22.301 [2024-11-27 19:28:31.867792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:22.301 [2024-11-27 19:28:31.867800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:22.301 [2024-11-27 19:28:31.867811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:22.301 [2024-11-27 19:28:31.867880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:22.301 [2024-11-27 19:28:31.867891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:22.301 [2024-11-27 19:28:31.867911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:22.301 [2024-11-27 19:28:31.867920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:22.301 [2024-11-27 19:28:31.867929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:22.301 [2024-11-27 19:28:31.867937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.301 [2024-11-27 19:28:31.867948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:22.301 [2024-11-27 19:28:31.867958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:30:22.301 [2024-11-27 19:28:31.867968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.301 [2024-11-27 19:28:31.868011] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
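The two get_bdev_size passes earlier in the trace reduce to plain block arithmetic: num_blocks x block_size, converted to MiB. That gives 1310720 x 4096 B = 5120 MiB for the NVMe namespace basen1 and 5242880 x 4096 B = 20480 MiB for the lvol built on top of it. A minimal standalone sketch of the same computation, reusing the jq filters visible in the trace (the actual helper in common/autotest_common.sh may differ in detail):

  # Query the bdev once, then derive its size in MiB the way the trace does.
  bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720
  echo $(( bs * nb / 1024 / 1024 ))              # 5120 (MiB)

The 20480 MiB lvol can exceed the 5120 MiB base namespace only because bdev_lvol_create was invoked with -t: the bdev dump above confirms "thin_provision": true and "num_allocated_clusters": 0, so clusters are allocated on first write rather than up front.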
00:30:22.301 [2024-11-27 19:28:31.868041] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:27.592 [2024-11-27 19:28:36.222471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.222590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:27.592 [2024-11-27 19:28:36.222613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4354.441 ms 00:30:27.592 [2024-11-27 19:28:36.222626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.260471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.260552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.592 [2024-11-27 19:28:36.260569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.572 ms 00:30:27.592 [2024-11-27 19:28:36.260582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.260688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.260702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:27.592 [2024-11-27 19:28:36.260712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:27.592 [2024-11-27 19:28:36.260734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.301792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.301855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.592 [2024-11-27 19:28:36.301869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.020 ms 00:30:27.592 [2024-11-27 19:28:36.301883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.301923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.301935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.592 [2024-11-27 19:28:36.301945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:27.592 [2024-11-27 19:28:36.301956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.302753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.302813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.592 [2024-11-27 19:28:36.302837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:30:27.592 [2024-11-27 19:28:36.302849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.302907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.302924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.592 [2024-11-27 19:28:36.302934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:27.592 [2024-11-27 19:28:36.302950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.323985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.324043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.592 [2024-11-27 19:28:36.324056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.012 ms 00:30:27.592 [2024-11-27 19:28:36.324068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.349507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:27.592 [2024-11-27 19:28:36.351266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.351315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:27.592 [2024-11-27 19:28:36.351332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.082 ms 00:30:27.592 [2024-11-27 19:28:36.351342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.385011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.385071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:27.592 [2024-11-27 19:28:36.385090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.617 ms 00:30:27.592 [2024-11-27 19:28:36.385099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.385249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.385263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:27.592 [2024-11-27 19:28:36.385280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:30:27.592 [2024-11-27 19:28:36.385290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.411819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.411871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:27.592 [2024-11-27 19:28:36.411889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.463 ms 00:30:27.592 [2024-11-27 19:28:36.411898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.438287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.438598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:27.592 [2024-11-27 19:28:36.438628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.323 ms 00:30:27.592 [2024-11-27 19:28:36.438638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.439473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.439506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:27.592 [2024-11-27 19:28:36.439525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.617 ms 00:30:27.592 [2024-11-27 19:28:36.439535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.592 [2024-11-27 19:28:36.534694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.592 [2024-11-27 19:28:36.534751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:27.592 [2024-11-27 19:28:36.534773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.089 ms 00:30:27.593 [2024-11-27 19:28:36.534783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.564463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:27.593 [2024-11-27 19:28:36.564520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:27.593 [2024-11-27 19:28:36.564538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.572 ms 00:30:27.593 [2024-11-27 19:28:36.564548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.591863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.593 [2024-11-27 19:28:36.591912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:27.593 [2024-11-27 19:28:36.591927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.255 ms 00:30:27.593 [2024-11-27 19:28:36.591937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.619057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.593 [2024-11-27 19:28:36.619120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:27.593 [2024-11-27 19:28:36.619156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.062 ms 00:30:27.593 [2024-11-27 19:28:36.619165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.619231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.593 [2024-11-27 19:28:36.619241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:27.593 [2024-11-27 19:28:36.619258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:27.593 [2024-11-27 19:28:36.619267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.619400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.593 [2024-11-27 19:28:36.619417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:27.593 [2024-11-27 19:28:36.619430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:27.593 [2024-11-27 19:28:36.619439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.593 [2024-11-27 19:28:36.621371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4776.975 ms, result 0 00:30:27.593 { 00:30:27.593 "name": "ftl", 00:30:27.593 "uuid": "d3964679-f9fd-4b43-a6c9-6913abbbe4be" 00:30:27.593 } 00:30:27.593 19:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:27.593 [2024-11-27 19:28:36.843724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.593 19:28:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:27.593 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:27.853 [2024-11-27 19:28:37.268118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:27.853 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:27.853 [2024-11-27 19:28:37.484753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:28.113 19:28:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:28.374 Fill FTL, iteration 1 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83090 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83090 /var/tmp/spdk.tgt.sock 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83090 ']' 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:28.374 19:28:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:28.374 [2024-11-27 19:28:37.932865] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
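upgrade_shutdown.sh@28-34 pins down the I/O geometry both fill passes use, and the numbers are self-consistent: bs x count = 1048576 x 1024 = 1073741824 B, exactly the $size of one pass. Spelled out as a sketch, with values copied from the xtrace above:

  size=1073741824   # bytes written per pass: 1 GiB
  bs=1048576        # dd block size: 1 MiB
  count=1024        # blocks per pass; bs*count == size
  qd=2              # spdk_dd queue depth
  iterations=2      # two fill+checksum passes; seek/skip advance by count each pass

Two passes therefore touch only the first 2 GiB of ftln1, a small slice of the 20480 MiB base device, but enough, per the cache_device dump later in the log, to close two of the five NV cache chunks.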
00:30:28.374 [2024-11-27 19:28:37.933674] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83090 ] 00:30:28.634 [2024-11-27 19:28:38.095365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.634 [2024-11-27 19:28:38.214267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.568 19:28:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.568 19:28:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:29.568 19:28:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:29.568 ftln1 00:30:29.568 19:28:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:29.568 19:28:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83090 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83090 ']' 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83090 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83090 00:30:29.825 killing process with pid 83090 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83090' 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83090 00:30:29.825 19:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83090 00:30:31.196 19:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:31.196 19:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:31.196 [2024-11-27 19:28:40.827755] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
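Before each spdk_dd run, tcp_initiator_setup points a short-lived initiator-side SPDK target at the NVMe/TCP export and snapshots its bdev subsystem into the --json config that spdk_dd consumes. Reassembled from the ftl/common.sh@167-173 xtrace above (the redirection into ini.json is an assumption; the trace only shows the file being tested for existence at @153):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  # Attach the exported FTL namespace; this surfaces it as bdev 'ftln1'.
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0
  # Wrap just the bdev subsystem in a top-level config object.
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

Handing spdk_dd a static JSON config instead of a live RPC socket lets each dd invocation boot its own app, attach ftln1, run, and exit cleanly, which is why the pid-83090 target above is killed as soon as the config has been captured.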
00:30:31.196 [2024-11-27 19:28:40.827871] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83133 ] 00:30:31.454 [2024-11-27 19:28:40.986992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.454 [2024-11-27 19:28:41.081017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.829  [2024-11-27T19:28:43.848Z] Copying: 206/1024 [MB] (206 MBps) [2024-11-27T19:28:44.790Z] Copying: 461/1024 [MB] (255 MBps) [2024-11-27T19:28:45.731Z] Copying: 714/1024 [MB] (253 MBps) [2024-11-27T19:28:45.731Z] Copying: 962/1024 [MB] (248 MBps) [2024-11-27T19:28:46.303Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:30:36.668 00:30:36.668 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:36.668 Calculate MD5 checksum, iteration 1 00:30:36.668 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.669 19:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.928 [2024-11-27 19:28:46.339383] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
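Fill pass 1 settles at an average of 241 MBps; the checksum pass that follows re-reads the same 1 GiB window over NVMe/TCP and fingerprints it. A sketch of that verification step, with paths and flags copied from the trace:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  # upgrade_shutdown.sh@47-48: fingerprint the window just read back.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '

The digest is stored in sums[i] so it can be compared against the data after the shutdown/upgrade cycle this test exercises.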
00:30:36.928 [2024-11-27 19:28:46.339679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83191 ] 00:30:36.928 [2024-11-27 19:28:46.496845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.186 [2024-11-27 19:28:46.573384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.569  [2024-11-27T19:28:48.778Z] Copying: 629/1024 [MB] (629 MBps) [2024-11-27T19:28:49.039Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:30:39.404 00:30:39.404 19:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:39.404 19:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:41.935 Fill FTL, iteration 2 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=94b2fedd43cdeec18c04632d846a976b 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:41.935 19:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:41.935 [2024-11-27 19:28:51.048792] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
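With sums[i]=94b2fedd43cdeec18c04632d846a976b recorded, the loop advances to its second window. In outline, the cycle that the @38-@48 xtrace lines walk through looks like the following (variable names are taken from the trace; the testfile path is abbreviated here, and this is a reconstruction, not the verbatim script):

  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))          # 0 -> 1024 -> 2048
      tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$(md5sum "$testfile" | cut -f1 '-d ')
  done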
00:30:41.935 [2024-11-27 19:28:51.048883] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83247 ] 00:30:41.935 [2024-11-27 19:28:51.204146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.935 [2024-11-27 19:28:51.298501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.312  [2024-11-27T19:28:53.887Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-27T19:28:54.830Z] Copying: 435/1024 [MB] (245 MBps) [2024-11-27T19:28:55.775Z] Copying: 685/1024 [MB] (250 MBps) [2024-11-27T19:28:56.714Z] Copying: 861/1024 [MB] (176 MBps) [2024-11-27T19:28:57.649Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:30:48.014 00:30:48.014 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:48.014 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:48.014 Calculate MD5 checksum, iteration 2 00:30:48.014 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:48.014 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:48.015 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:48.015 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:48.015 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:48.015 19:28:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:48.015 [2024-11-27 19:28:57.400608] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
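The --skip=1024 on this second read-back mirrors the --seek=1024 of the second fill, so each digest covers exactly the window written in the same pass. The byte offsets follow directly from the geometry:

  # Pass i (0-based) occupies bytes [ i*count*bs, (i+1)*count*bs ) of ftln1.
  i=1; echo $(( i * 1024 * 1048576 ))   # second pass starts 1073741824 B (1 GiB) in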
00:30:48.015 [2024-11-27 19:28:57.400725] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83311 ] 00:30:48.015 [2024-11-27 19:28:57.560959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.273 [2024-11-27 19:28:57.654296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.651  [2024-11-27T19:28:59.856Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-27T19:29:00.799Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:30:51.164 00:30:51.164 19:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:51.164 19:29:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:53.781 19:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:53.781 19:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=15d35a652fc95fc3aeb6d25cefc32c9a 00:30:53.781 19:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:53.781 19:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:53.781 19:29:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:53.781 [2024-11-27 19:29:02.996629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.781 [2024-11-27 19:29:02.996684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.781 [2024-11-27 19:29:02.996698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:53.781 [2024-11-27 19:29:02.996704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.781 [2024-11-27 19:29:02.996722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.781 [2024-11-27 19:29:02.996732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.781 [2024-11-27 19:29:02.996740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:53.781 [2024-11-27 19:29:02.996746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.781 [2024-11-27 19:29:02.996775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.781 [2024-11-27 19:29:02.996782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.781 [2024-11-27 19:29:02.996788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:53.781 [2024-11-27 19:29:02.996794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.781 [2024-11-27 19:29:02.996849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.208 ms, result 0 00:30:53.781 true 00:30:53.781 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.781 { 00:30:53.781 "name": "ftl", 00:30:53.781 "properties": [ 00:30:53.781 { 00:30:53.781 "name": "superblock_version", 00:30:53.781 "value": 5, 00:30:53.781 "read-only": true 00:30:53.781 }, 00:30:53.781 { 00:30:53.781 "name": "base_device", 00:30:53.781 "bands": [ 00:30:53.781 { 00:30:53.781 "id": 0, 00:30:53.781 "state": "FREE", 00:30:53.781 "validity": 0.0 
00:30:53.781 }, 00:30:53.782 { 00:30:53.782 "id": 1, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 2, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 3, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 4, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 5, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 6, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 7, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 8, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 9, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 10, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 11, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 12, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 13, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 14, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 15, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 16, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 17, 00:30:53.782 "state": "FREE", 00:30:53.782 "validity": 0.0 00:30:53.782 } 00:30:53.782 ], 00:30:53.782 "read-only": true 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "name": "cache_device", 00:30:53.782 "type": "bdev", 00:30:53.782 "chunks": [ 00:30:53.782 { 00:30:53.782 "id": 0, 00:30:53.782 "state": "INACTIVE", 00:30:53.782 "utilization": 0.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 1, 00:30:53.782 "state": "CLOSED", 00:30:53.782 "utilization": 1.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 2, 00:30:53.782 "state": "CLOSED", 00:30:53.782 "utilization": 1.0 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 3, 00:30:53.782 "state": "OPEN", 00:30:53.782 "utilization": 0.001953125 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "id": 4, 00:30:53.782 "state": "OPEN", 00:30:53.782 "utilization": 0.0 00:30:53.782 } 00:30:53.782 ], 00:30:53.782 "read-only": true 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "name": "verbose_mode", 00:30:53.782 "value": true, 00:30:53.782 "unit": "", 00:30:53.782 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:53.782 }, 00:30:53.782 { 00:30:53.782 "name": "prep_upgrade_on_shutdown", 00:30:53.782 "value": false, 00:30:53.782 "unit": "", 00:30:53.782 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:53.782 } 00:30:53.782 ] 00:30:53.782 } 00:30:53.782 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:53.782 [2024-11-27 19:29:03.392905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
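The chunk table that closes the properties dump above is what upgrade_shutdown.sh@63 inspects next: it counts cache chunks whose utilization is non-zero before the prep_upgrade_on_shutdown path proceeds. The jq from the trace, applied to that dump:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3: chunks 1 and 2 (CLOSED, utilization 1.0) plus chunk 3 (OPEN, 0.001953125);
  #    chunk 0 (INACTIVE) and chunk 4 (OPEN) sit at 0.0 and are filtered out.

The @64 guard [[ 3 -eq 0 ]] is therefore false: the NV cache is not empty going into the shutdown.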
00:30:53.782 [2024-11-27 19:29:03.392937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.782 [2024-11-27 19:29:03.392945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:53.782 [2024-11-27 19:29:03.392951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.782 [2024-11-27 19:29:03.392967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.782 [2024-11-27 19:29:03.392974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.782 [2024-11-27 19:29:03.392980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:53.782 [2024-11-27 19:29:03.392986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.782 [2024-11-27 19:29:03.393000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.782 [2024-11-27 19:29:03.393006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.782 [2024-11-27 19:29:03.393012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:53.782 [2024-11-27 19:29:03.393018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.782 [2024-11-27 19:29:03.393058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.143 ms, result 0 00:30:53.782 true 00:30:53.782 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:53.782 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:53.782 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:54.041 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:54.041 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:54.041 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:54.299 [2024-11-27 19:29:03.797283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.299 [2024-11-27 19:29:03.797312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:54.299 [2024-11-27 19:29:03.797320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:54.299 [2024-11-27 19:29:03.797326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.299 [2024-11-27 19:29:03.797341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.299 [2024-11-27 19:29:03.797347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:54.299 [2024-11-27 19:29:03.797354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:54.299 [2024-11-27 19:29:03.797359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.299 [2024-11-27 19:29:03.797374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.299 [2024-11-27 19:29:03.797379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:54.299 [2024-11-27 19:29:03.797385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:54.299 [2024-11-27 19:29:03.797391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:54.299 [2024-11-27 19:29:03.797431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.137 ms, result 0 00:30:54.299 true 00:30:54.299 19:29:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:54.558 { 00:30:54.558 "name": "ftl", 00:30:54.558 "properties": [ 00:30:54.558 { 00:30:54.558 "name": "superblock_version", 00:30:54.558 "value": 5, 00:30:54.558 "read-only": true 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "name": "base_device", 00:30:54.558 "bands": [ 00:30:54.558 { 00:30:54.558 "id": 0, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 1, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 2, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 3, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 4, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 5, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 6, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 7, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 8, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 9, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 10, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 11, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 12, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 13, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 14, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 15, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 16, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 17, 00:30:54.558 "state": "FREE", 00:30:54.558 "validity": 0.0 00:30:54.558 } 00:30:54.558 ], 00:30:54.558 "read-only": true 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "name": "cache_device", 00:30:54.558 "type": "bdev", 00:30:54.558 "chunks": [ 00:30:54.558 { 00:30:54.558 "id": 0, 00:30:54.558 "state": "INACTIVE", 00:30:54.558 "utilization": 0.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 1, 00:30:54.558 "state": "CLOSED", 00:30:54.558 "utilization": 1.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.558 "id": 2, 00:30:54.558 "state": "CLOSED", 00:30:54.558 "utilization": 1.0 00:30:54.558 }, 00:30:54.558 { 00:30:54.559 "id": 3, 00:30:54.559 "state": "OPEN", 00:30:54.559 "utilization": 0.001953125 00:30:54.559 }, 00:30:54.559 { 00:30:54.559 "id": 4, 00:30:54.559 "state": "OPEN", 00:30:54.559 "utilization": 0.0 00:30:54.559 } 00:30:54.559 ], 00:30:54.559 "read-only": true 00:30:54.559 }, 00:30:54.559 { 00:30:54.559 "name": "verbose_mode", 
00:30:54.559 "value": true, 00:30:54.559 "unit": "", 00:30:54.559 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:54.559 }, 00:30:54.559 { 00:30:54.559 "name": "prep_upgrade_on_shutdown", 00:30:54.559 "value": true, 00:30:54.559 "unit": "", 00:30:54.559 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:54.559 } 00:30:54.559 ] 00:30:54.559 } 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82957 ]] 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82957 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82957 ']' 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82957 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82957 00:30:54.559 killing process with pid 82957 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82957' 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82957 00:30:54.559 19:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82957 00:30:55.126 [2024-11-27 19:29:04.605034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:55.126 [2024-11-27 19:29:04.615479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.126 [2024-11-27 19:29:04.615514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:55.126 [2024-11-27 19:29:04.615526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:55.126 [2024-11-27 19:29:04.615533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.126 [2024-11-27 19:29:04.615551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:55.126 [2024-11-27 19:29:04.617749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.126 [2024-11-27 19:29:04.617774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:55.126 [2024-11-27 19:29:04.617783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.186 ms 00:30:55.126 [2024-11-27 19:29:04.617794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.254 [2024-11-27 19:29:12.096049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.254 [2024-11-27 19:29:12.096111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:03.254 [2024-11-27 19:29:12.096140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7478.208 ms 00:31:03.254 [2024-11-27 19:29:12.096148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.254 [2024-11-27 19:29:12.097512] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:03.254 [2024-11-27 19:29:12.097538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:03.254 [2024-11-27 19:29:12.097546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.350 ms 00:31:03.254 [2024-11-27 19:29:12.097553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.098426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.098561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:03.255 [2024-11-27 19:29:12.098574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 00:31:03.255 [2024-11-27 19:29:12.098585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.107198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.107225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:03.255 [2024-11-27 19:29:12.107233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.585 ms 00:31:03.255 [2024-11-27 19:29:12.107240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.113065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.113091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:03.255 [2024-11-27 19:29:12.113100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.800 ms 00:31:03.255 [2024-11-27 19:29:12.113107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.113181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.113194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:03.255 [2024-11-27 19:29:12.113201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:03.255 [2024-11-27 19:29:12.113207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.121042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.121158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:03.255 [2024-11-27 19:29:12.121170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.822 ms 00:31:03.255 [2024-11-27 19:29:12.121175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.128987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.129077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:03.255 [2024-11-27 19:29:12.129089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.789 ms 00:31:03.255 [2024-11-27 19:29:12.129095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.136641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.136663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:03.255 [2024-11-27 19:29:12.136670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.523 ms 00:31:03.255 [2024-11-27 19:29:12.136676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.144025] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.144115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:03.255 [2024-11-27 19:29:12.144138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.300 ms 00:31:03.255 [2024-11-27 19:29:12.144144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.144165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:03.255 [2024-11-27 19:29:12.144183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.255 [2024-11-27 19:29:12.144191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.255 [2024-11-27 19:29:12.144197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:03.255 [2024-11-27 19:29:12.144204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:03.255 [2024-11-27 19:29:12.144297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:03.255 [2024-11-27 19:29:12.144303] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3964679-f9fd-4b43-a6c9-6913abbbe4be 00:31:03.255 [2024-11-27 19:29:12.144309] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:03.255 [2024-11-27 19:29:12.144315] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:03.255 [2024-11-27 19:29:12.144320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:03.255 [2024-11-27 19:29:12.144326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:03.255 [2024-11-27 19:29:12.144334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:03.255 [2024-11-27 19:29:12.144340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:03.255 [2024-11-27 19:29:12.144348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:03.255 [2024-11-27 19:29:12.144354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:03.255 [2024-11-27 19:29:12.144359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:03.255 [2024-11-27 19:29:12.144367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.144374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:03.255 [2024-11-27 19:29:12.144381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:31:03.255 [2024-11-27 19:29:12.144388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.154513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.154601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:03.255 [2024-11-27 19:29:12.154617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.113 ms 00:31:03.255 [2024-11-27 19:29:12.154624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.154914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.255 [2024-11-27 19:29:12.154923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:03.255 [2024-11-27 19:29:12.154929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:31:03.255 [2024-11-27 19:29:12.154935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.189744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.189840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:03.255 [2024-11-27 19:29:12.189852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.189859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.189883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.189890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:03.255 [2024-11-27 19:29:12.189897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.189903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.189963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.189972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:03.255 [2024-11-27 19:29:12.189983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.189991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.190003] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.190011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:03.255 [2024-11-27 19:29:12.190018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.190024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.252476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.252595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:03.255 [2024-11-27 19:29:12.252614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.252621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.303043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.303079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:03.255 [2024-11-27 19:29:12.303095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.303102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.255 [2024-11-27 19:29:12.303181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.255 [2024-11-27 19:29:12.303190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:03.255 [2024-11-27 19:29:12.303197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.255 [2024-11-27 19:29:12.303209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 [2024-11-27 19:29:12.303259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.256 [2024-11-27 19:29:12.303268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:03.256 [2024-11-27 19:29:12.303275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.256 [2024-11-27 19:29:12.303282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 [2024-11-27 19:29:12.303374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.256 [2024-11-27 19:29:12.303383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:03.256 [2024-11-27 19:29:12.303389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.256 [2024-11-27 19:29:12.303395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 [2024-11-27 19:29:12.303426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.256 [2024-11-27 19:29:12.303434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:03.256 [2024-11-27 19:29:12.303441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.256 [2024-11-27 19:29:12.303447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 [2024-11-27 19:29:12.303481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.256 [2024-11-27 19:29:12.303489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:03.256 [2024-11-27 19:29:12.303498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.256 [2024-11-27 19:29:12.303505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 
[2024-11-27 19:29:12.303549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.256 [2024-11-27 19:29:12.303557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:03.256 [2024-11-27 19:29:12.303564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.256 [2024-11-27 19:29:12.303571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.256 [2024-11-27 19:29:12.303682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7688.153 ms, result 0 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83495 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83495 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83495 ']' 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.453 19:29:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:07.453 [2024-11-27 19:29:16.372170] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
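The xtrace at ftl/common.sh@81-@91 above brings the target back up from the JSON config saved before shutdown. A minimal sketch of that setup pattern, assuming the waitforlisten helper from autotest_common.sh; this is a reconstruction from the calls logged here, not the verbatim helper:

tcp_target_setup() {
    local cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    [[ -f $cfg ]] || return 1                       # tgt.json was written by the earlier run
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config="$cfg" &
    spdk_tgt_pid=$!
    export spdk_tgt_pid
    waitforlisten "$spdk_tgt_pid"                   # blocks until /var/tmp/spdk.sock answers
}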
00:31:07.453 [2024-11-27 19:29:16.372907] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83495 ] 00:31:07.453 [2024-11-27 19:29:16.529362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.453 [2024-11-27 19:29:16.621505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.711 [2024-11-27 19:29:17.251504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:07.711 [2024-11-27 19:29:17.251564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:07.970 [2024-11-27 19:29:17.400277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.400311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:07.970 [2024-11-27 19:29:17.400323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:07.970 [2024-11-27 19:29:17.400330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.400379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.400388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:07.970 [2024-11-27 19:29:17.400395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:07.970 [2024-11-27 19:29:17.400401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.400416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:07.970 [2024-11-27 19:29:17.400928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:07.970 [2024-11-27 19:29:17.400942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.400949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:07.970 [2024-11-27 19:29:17.400956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:31:07.970 [2024-11-27 19:29:17.400962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.402221] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:07.970 [2024-11-27 19:29:17.412661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.412691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:07.970 [2024-11-27 19:29:17.412700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.441 ms 00:31:07.970 [2024-11-27 19:29:17.412707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.412753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.412760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:07.970 [2024-11-27 19:29:17.412767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:07.970 [2024-11-27 19:29:17.412773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.418903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 
19:29:17.418928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:07.970 [2024-11-27 19:29:17.418935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.086 ms 00:31:07.970 [2024-11-27 19:29:17.418941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.418986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.418993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:07.970 [2024-11-27 19:29:17.418999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:07.970 [2024-11-27 19:29:17.419005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.419049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.419059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:07.970 [2024-11-27 19:29:17.419065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:07.970 [2024-11-27 19:29:17.419072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.419104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:07.970 [2024-11-27 19:29:17.422130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.422152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:07.970 [2024-11-27 19:29:17.422163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.046 ms 00:31:07.970 [2024-11-27 19:29:17.422169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.422193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.422200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:07.970 [2024-11-27 19:29:17.422207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:07.970 [2024-11-27 19:29:17.422213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.422228] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:07.970 [2024-11-27 19:29:17.422248] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:07.970 [2024-11-27 19:29:17.422277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:07.970 [2024-11-27 19:29:17.422289] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:07.970 [2024-11-27 19:29:17.422374] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:07.970 [2024-11-27 19:29:17.422383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:07.970 [2024-11-27 19:29:17.422391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:07.970 [2024-11-27 19:29:17.422399] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422407] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422414] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:07.970 [2024-11-27 19:29:17.422420] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:07.970 [2024-11-27 19:29:17.422426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:07.970 [2024-11-27 19:29:17.422432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:07.970 [2024-11-27 19:29:17.422438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.422444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:07.970 [2024-11-27 19:29:17.422450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:31:07.970 [2024-11-27 19:29:17.422455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.422522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.970 [2024-11-27 19:29:17.422529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:07.970 [2024-11-27 19:29:17.422537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:07.970 [2024-11-27 19:29:17.422544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.970 [2024-11-27 19:29:17.422622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:07.970 [2024-11-27 19:29:17.422630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:07.970 [2024-11-27 19:29:17.422636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:07.970 [2024-11-27 19:29:17.422654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:07.970 [2024-11-27 19:29:17.422665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:07.970 [2024-11-27 19:29:17.422672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:07.970 [2024-11-27 19:29:17.422676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:07.970 [2024-11-27 19:29:17.422689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:07.970 [2024-11-27 19:29:17.422694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:07.970 [2024-11-27 19:29:17.422706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:07.970 [2024-11-27 19:29:17.422711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:07.970 [2024-11-27 19:29:17.422721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:07.970 [2024-11-27 19:29:17.422725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.970 [2024-11-27 19:29:17.422731] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:07.970 [2024-11-27 19:29:17.422735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:07.970 [2024-11-27 19:29:17.422740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:07.970 [2024-11-27 19:29:17.422756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:07.970 [2024-11-27 19:29:17.422761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:07.970 [2024-11-27 19:29:17.422771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:07.970 [2024-11-27 19:29:17.422776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.970 [2024-11-27 19:29:17.422781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:07.971 [2024-11-27 19:29:17.422786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:07.971 [2024-11-27 19:29:17.422790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:07.971 [2024-11-27 19:29:17.422795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:07.971 [2024-11-27 19:29:17.422800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:07.971 [2024-11-27 19:29:17.422806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:07.971 [2024-11-27 19:29:17.422816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:07.971 [2024-11-27 19:29:17.422820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:07.971 [2024-11-27 19:29:17.422830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:07.971 [2024-11-27 19:29:17.422844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:07.971 [2024-11-27 19:29:17.422849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422855] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:07.971 [2024-11-27 19:29:17.422863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:07.971 [2024-11-27 19:29:17.422869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:07.971 [2024-11-27 19:29:17.422876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:07.971 [2024-11-27 19:29:17.422882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:07.971 [2024-11-27 19:29:17.422887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:07.971 [2024-11-27 19:29:17.422892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:07.971 [2024-11-27 19:29:17.422898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:07.971 [2024-11-27 19:29:17.422903] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:07.971 [2024-11-27 19:29:17.422908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:07.971 [2024-11-27 19:29:17.422914] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:07.971 [2024-11-27 19:29:17.422922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:07.971 [2024-11-27 19:29:17.422934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:07.971 [2024-11-27 19:29:17.422950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:07.971 [2024-11-27 19:29:17.422955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:07.971 [2024-11-27 19:29:17.422960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:07.971 [2024-11-27 19:29:17.422966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.422997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:07.971 [2024-11-27 19:29:17.423002] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:07.971 [2024-11-27 19:29:17.423008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.423014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:07.971 [2024-11-27 19:29:17.423020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:07.971 [2024-11-27 19:29:17.423026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:07.971 [2024-11-27 19:29:17.423032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:07.971 [2024-11-27 19:29:17.423038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.971 [2024-11-27 19:29:17.423045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:07.971 [2024-11-27 19:29:17.423051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:31:07.971 [2024-11-27 19:29:17.423056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.971 [2024-11-27 19:29:17.423110] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:07.971 [2024-11-27 19:29:17.423121] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:11.247 [2024-11-27 19:29:20.865072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.248 [2024-11-27 19:29:20.865184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:11.248 [2024-11-27 19:29:20.865206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3441.945 ms 00:31:11.248 [2024-11-27 19:29:20.865216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.509 [2024-11-27 19:29:20.901857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.509 [2024-11-27 19:29:20.901931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:11.509 [2024-11-27 19:29:20.901949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.363 ms 00:31:11.509 [2024-11-27 19:29:20.901969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.509 [2024-11-27 19:29:20.902088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.509 [2024-11-27 19:29:20.902103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:11.509 [2024-11-27 19:29:20.902114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:11.509 [2024-11-27 19:29:20.902146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.509 [2024-11-27 19:29:20.941381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.509 [2024-11-27 19:29:20.941442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:11.509 [2024-11-27 19:29:20.941464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.175 ms 00:31:11.509 [2024-11-27 19:29:20.941474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.509 [2024-11-27 19:29:20.941526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.509 [2024-11-27 19:29:20.941536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:11.509 [2024-11-27 19:29:20.941546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:11.509 [2024-11-27 19:29:20.941555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.509 [2024-11-27 19:29:20.942337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.509 [2024-11-27 19:29:20.942376] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:11.509 [2024-11-27 19:29:20.942389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.714 ms 00:31:11.509 [2024-11-27 19:29:20.942407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:20.942467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:20.942481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:11.510 [2024-11-27 19:29:20.942492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:11.510 [2024-11-27 19:29:20.942502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:20.963724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:20.964058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:11.510 [2024-11-27 19:29:20.964081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.193 ms 00:31:11.510 [2024-11-27 19:29:20.964092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:20.990387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:11.510 [2024-11-27 19:29:20.990449] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:11.510 [2024-11-27 19:29:20.990465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:20.990477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:11.510 [2024-11-27 19:29:20.990489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.207 ms 00:31:11.510 [2024-11-27 19:29:20.990498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.005794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.005847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:11.510 [2024-11-27 19:29:21.005861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.232 ms 00:31:11.510 [2024-11-27 19:29:21.005872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.018828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.018874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:11.510 [2024-11-27 19:29:21.018888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.893 ms 00:31:11.510 [2024-11-27 19:29:21.018898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.031691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.031732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:11.510 [2024-11-27 19:29:21.031744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.742 ms 00:31:11.510 [2024-11-27 19:29:21.031752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.032483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.032541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:11.510 [2024-11-27 
19:29:21.032552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:31:11.510 [2024-11-27 19:29:21.032561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.105328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.105636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:11.510 [2024-11-27 19:29:21.105662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.742 ms 00:31:11.510 [2024-11-27 19:29:21.105673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.117930] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:11.510 [2024-11-27 19:29:21.119189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.119404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:11.510 [2024-11-27 19:29:21.119418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.379 ms 00:31:11.510 [2024-11-27 19:29:21.119428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.119529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.119546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:11.510 [2024-11-27 19:29:21.119557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:11.510 [2024-11-27 19:29:21.119565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.119652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.119667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:11.510 [2024-11-27 19:29:21.119676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:11.510 [2024-11-27 19:29:21.119685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.119715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.119726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:11.510 [2024-11-27 19:29:21.119740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:11.510 [2024-11-27 19:29:21.119749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.510 [2024-11-27 19:29:21.119789] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:11.510 [2024-11-27 19:29:21.119802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.510 [2024-11-27 19:29:21.119810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:11.510 [2024-11-27 19:29:21.119819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:11.510 [2024-11-27 19:29:21.119830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.771 [2024-11-27 19:29:21.145437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.771 [2024-11-27 19:29:21.145495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:11.771 [2024-11-27 19:29:21.145509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.579 ms 00:31:11.771 [2024-11-27 19:29:21.145517] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.771 [2024-11-27 19:29:21.145617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:11.771 [2024-11-27 19:29:21.145627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:11.771 [2024-11-27 19:29:21.145637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:31:11.771 [2024-11-27 19:29:21.145645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:11.771 [2024-11-27 19:29:21.147192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3746.256 ms, result 0 00:31:11.771 [2024-11-27 19:29:21.161890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.771 [2024-11-27 19:29:21.177888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:11.771 [2024-11-27 19:29:21.186114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:11.771 19:29:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.771 19:29:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:11.771 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:11.771 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:11.771 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:12.033 [2024-11-27 19:29:21.418075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.033 [2024-11-27 19:29:21.418144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:12.033 [2024-11-27 19:29:21.418162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:12.033 [2024-11-27 19:29:21.418172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.033 [2024-11-27 19:29:21.418198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.033 [2024-11-27 19:29:21.418208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:12.033 [2024-11-27 19:29:21.418217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.033 [2024-11-27 19:29:21.418226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.033 [2024-11-27 19:29:21.418247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.033 [2024-11-27 19:29:21.418257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:12.033 [2024-11-27 19:29:21.418266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:12.033 [2024-11-27 19:29:21.418275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.033 [2024-11-27 19:29:21.418336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.259 ms, result 0 00:31:12.033 true 00:31:12.033 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.033 { 00:31:12.033 "name": "ftl", 00:31:12.033 "properties": [ 00:31:12.033 { 00:31:12.033 "name": "superblock_version", 00:31:12.033 "value": 5, 00:31:12.033 "read-only": true 00:31:12.033 }, 
00:31:12.033 { 00:31:12.033 "name": "base_device", 00:31:12.033 "bands": [ 00:31:12.033 { 00:31:12.033 "id": 0, 00:31:12.033 "state": "CLOSED", 00:31:12.033 "validity": 1.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 1, 00:31:12.033 "state": "CLOSED", 00:31:12.033 "validity": 1.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 2, 00:31:12.033 "state": "CLOSED", 00:31:12.033 "validity": 0.007843137254901933 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 3, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 4, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 5, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 6, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 7, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 8, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 9, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 10, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 11, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 12, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 13, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 14, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 15, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 16, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 17, 00:31:12.033 "state": "FREE", 00:31:12.033 "validity": 0.0 00:31:12.033 } 00:31:12.033 ], 00:31:12.033 "read-only": true 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "name": "cache_device", 00:31:12.033 "type": "bdev", 00:31:12.033 "chunks": [ 00:31:12.033 { 00:31:12.033 "id": 0, 00:31:12.033 "state": "INACTIVE", 00:31:12.033 "utilization": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 1, 00:31:12.033 "state": "OPEN", 00:31:12.033 "utilization": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 2, 00:31:12.033 "state": "OPEN", 00:31:12.033 "utilization": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 3, 00:31:12.033 "state": "FREE", 00:31:12.033 "utilization": 0.0 00:31:12.033 }, 00:31:12.033 { 00:31:12.033 "id": 4, 00:31:12.033 "state": "FREE", 00:31:12.033 "utilization": 0.0 00:31:12.033 } 00:31:12.033 ], 00:31:12.033 "read-only": true 00:31:12.033 }, 00:31:12.033 { 00:31:12.034 "name": "verbose_mode", 00:31:12.034 "value": true, 00:31:12.034 "unit": "", 00:31:12.034 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:12.034 }, 00:31:12.034 { 00:31:12.034 "name": "prep_upgrade_on_shutdown", 00:31:12.034 "value": false, 00:31:12.034 "unit": "", 00:31:12.034 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:12.034 } 00:31:12.034 ] 00:31:12.034 } 00:31:12.295 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:12.295 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:12.295 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.295 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:12.295 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:12.296 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:12.296 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:12.296 19:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:12.557 Validate MD5 checksum, iteration 1 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:12.557 19:29:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:12.819 [2024-11-27 19:29:22.204218] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:31:12.819 [2024-11-27 19:29:22.204365] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83575 ] 00:31:12.819 [2024-11-27 19:29:22.366787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.080 [2024-11-27 19:29:22.485921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.467  [2024-11-27T19:29:25.047Z] Copying: 461/1024 [MB] (461 MBps) [2024-11-27T19:29:25.308Z] Copying: 994/1024 [MB] (533 MBps) [2024-11-27T19:29:26.248Z] Copying: 1024/1024 [MB] (average 500 MBps) 00:31:16.613 00:31:16.613 19:29:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:16.613 19:29:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:19.146 Validate MD5 checksum, iteration 2 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=94b2fedd43cdeec18c04632d846a976b 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 94b2fedd43cdeec18c04632d846a976b != \9\4\b\2\f\e\d\d\4\3\c\d\e\e\c\1\8\c\0\4\6\3\2\d\8\4\6\a\9\7\6\b ]] 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:19.146 19:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:19.146 [2024-11-27 19:29:28.269090] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
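
The validate-checksum pass driving these iterations can be summarized as the loop below; a sketch assuming the paths from this log, with expected[] standing in for the recorded hashes (a hypothetical variable, not part of the harness):

skip=0
for i in 1 2; do
  # Pull the next 1024 MiB window out of ftln1 over NVMe/TCP.
  build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=test/ftl/config/ini.json \
    --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum test/ftl/file | cut -f1 -d' ')
  [[ "$sum" == "${expected[$i]}" ]] || { echo "MD5 mismatch, iteration $i"; exit 1; }
  skip=$((skip + 1024))
done

The two sums seen here (94b2fedd... and 15d35a65...) are taken on this first pass and must reproduce bit-for-bit when the same windows are read back after the kill -9 restart below.
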
00:31:19.146 [2024-11-27 19:29:28.269353] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83643 ] 00:31:19.146 [2024-11-27 19:29:28.424579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.146 [2024-11-27 19:29:28.498992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.530  [2024-11-27T19:29:30.736Z] Copying: 667/1024 [MB] (667 MBps) [2024-11-27T19:29:34.925Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:31:25.290 00:31:25.290 19:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:25.290 19:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=15d35a652fc95fc3aeb6d25cefc32c9a 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 15d35a652fc95fc3aeb6d25cefc32c9a != \1\5\d\3\5\a\6\5\2\f\c\9\5\f\c\3\a\e\b\6\d\2\5\c\e\f\c\3\2\c\9\a ]] 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83495 ]] 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83495 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83732 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83732 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83732 ']' 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
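
What happens next in the trace above is the core of the test: tcp_target_shutdown_dirty kills the target with SIGKILL, so FTL gets no chance to persist a clean state, and tcp_target_setup immediately relaunches it from the saved configuration. In outline, assuming the paths shown in this log:

# Dirty shutdown: no signal handler runs, so the FTL clean-state marker
# is never written and the next load must take the recovery path.
kill -9 "$spdk_tgt_pid"
# Relaunch from the config captured while the bdevs were up.
build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
spdk_tgt_pid=$!
# Block until the RPC socket answers (the waitforlisten step above).
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

The long startup trace that follows (band state recovery, P2L checkpoint restore, open-chunk recovery) is FTL working through exactly that dirty state.
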
00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.665 19:29:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.923 [2024-11-27 19:29:36.331473] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:26.923 [2024-11-27 19:29:36.331564] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83732 ] 00:31:26.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83495 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:26.923 [2024-11-27 19:29:36.480699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.181 [2024-11-27 19:29:36.571500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.750 [2024-11-27 19:29:37.200119] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.750 [2024-11-27 19:29:37.200188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.750 [2024-11-27 19:29:37.356762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.750 [2024-11-27 19:29:37.356808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:27.750 [2024-11-27 19:29:37.356821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:27.750 [2024-11-27 19:29:37.356830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.750 [2024-11-27 19:29:37.356884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.356894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:27.751 [2024-11-27 19:29:37.356902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:27.751 [2024-11-27 19:29:37.356909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.356930] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:27.751 [2024-11-27 19:29:37.357601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:27.751 [2024-11-27 19:29:37.357621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.357629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:27.751 [2024-11-27 19:29:37.357637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.696 ms 00:31:27.751 [2024-11-27 19:29:37.357644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.357943] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:27.751 [2024-11-27 19:29:37.374001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.374033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:27.751 [2024-11-27 19:29:37.374045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.060 ms 00:31:27.751 [2024-11-27 19:29:37.374053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.382920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:27.751 [2024-11-27 19:29:37.382950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:27.751 [2024-11-27 19:29:37.382960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:27.751 [2024-11-27 19:29:37.382967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.383296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.383313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:27.751 [2024-11-27 19:29:37.383322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:31:27.751 [2024-11-27 19:29:37.383329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.383376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.383389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:27.751 [2024-11-27 19:29:37.383398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:27.751 [2024-11-27 19:29:37.383405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.383428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.751 [2024-11-27 19:29:37.383436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:27.751 [2024-11-27 19:29:37.383444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:27.751 [2024-11-27 19:29:37.383451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.751 [2024-11-27 19:29:37.383469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:28.013 [2024-11-27 19:29:37.386466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.013 [2024-11-27 19:29:37.386490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:28.013 [2024-11-27 19:29:37.386499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.001 ms 00:31:28.013 [2024-11-27 19:29:37.386509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.013 [2024-11-27 19:29:37.386534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.013 [2024-11-27 19:29:37.386542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:28.013 [2024-11-27 19:29:37.386550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.013 [2024-11-27 19:29:37.386557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.013 [2024-11-27 19:29:37.386577] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:28.014 [2024-11-27 19:29:37.386593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:28.014 [2024-11-27 19:29:37.386626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:28.014 [2024-11-27 19:29:37.386642] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:28.014 [2024-11-27 19:29:37.386746] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:28.014 [2024-11-27 19:29:37.386756] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:28.014 [2024-11-27 19:29:37.386766] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:28.014 [2024-11-27 19:29:37.386775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:28.014 [2024-11-27 19:29:37.386784] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:28.014 [2024-11-27 19:29:37.386791] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:28.014 [2024-11-27 19:29:37.386798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:28.014 [2024-11-27 19:29:37.386806] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:28.014 [2024-11-27 19:29:37.386812] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:28.014 [2024-11-27 19:29:37.386822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.014 [2024-11-27 19:29:37.386829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:28.014 [2024-11-27 19:29:37.386837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:31:28.014 [2024-11-27 19:29:37.386843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.014 [2024-11-27 19:29:37.386927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.014 [2024-11-27 19:29:37.386934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:28.014 [2024-11-27 19:29:37.386941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:31:28.014 [2024-11-27 19:29:37.386948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.014 [2024-11-27 19:29:37.387061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:28.014 [2024-11-27 19:29:37.387074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:28.014 [2024-11-27 19:29:37.387082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:28.014 [2024-11-27 19:29:37.387139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:28.014 [2024-11-27 19:29:37.387154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:28.014 [2024-11-27 19:29:37.387161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:28.014 [2024-11-27 19:29:37.387168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:28.014 [2024-11-27 19:29:37.387181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:28.014 [2024-11-27 19:29:37.387187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:28.014 [2024-11-27 19:29:37.387200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:28.014 [2024-11-27 19:29:37.387206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:28.014 [2024-11-27 19:29:37.387219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:28.014 [2024-11-27 19:29:37.387225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:28.014 [2024-11-27 19:29:37.387238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:28.014 [2024-11-27 19:29:37.387263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:28.014 [2024-11-27 19:29:37.387282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:28.014 [2024-11-27 19:29:37.387301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:28.014 [2024-11-27 19:29:37.387319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:28.014 [2024-11-27 19:29:37.387338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:28.014 [2024-11-27 19:29:37.387359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:28.014 [2024-11-27 19:29:37.387379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:28.014 [2024-11-27 19:29:37.387385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387391] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:28.014 [2024-11-27 19:29:37.387399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:28.014 [2024-11-27 19:29:37.387406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:28.014 [2024-11-27 19:29:37.387420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:28.014 [2024-11-27 19:29:37.387427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:28.014 [2024-11-27 19:29:37.387433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:28.014 [2024-11-27 19:29:37.387445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:28.014 [2024-11-27 19:29:37.387452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:28.014 [2024-11-27 19:29:37.387458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:28.014 [2024-11-27 19:29:37.387466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:28.014 [2024-11-27 19:29:37.387475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:28.014 [2024-11-27 19:29:37.387491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:28.014 [2024-11-27 19:29:37.387512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:28.014 [2024-11-27 19:29:37.387519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:28.014 [2024-11-27 19:29:37.387526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:28.014 [2024-11-27 19:29:37.387532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:28.014 [2024-11-27 19:29:37.387582] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:28.014 [2024-11-27 19:29:37.387590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.014 [2024-11-27 19:29:37.387608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:28.014 [2024-11-27 19:29:37.387614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:28.014 [2024-11-27 19:29:37.387621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:28.014 [2024-11-27 19:29:37.387628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.387635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:28.015 [2024-11-27 19:29:37.387642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:31:28.015 [2024-11-27 19:29:37.387649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.411394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.411424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:28.015 [2024-11-27 19:29:37.411433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.696 ms 00:31:28.015 [2024-11-27 19:29:37.411441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.411477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.411485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:28.015 [2024-11-27 19:29:37.411493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:28.015 [2024-11-27 19:29:37.411500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.441218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.441248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:28.015 [2024-11-27 19:29:37.441257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.669 ms 00:31:28.015 [2024-11-27 19:29:37.441265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.441290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.441298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:28.015 [2024-11-27 19:29:37.441306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:28.015 [2024-11-27 19:29:37.441316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.441401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.441411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:28.015 [2024-11-27 19:29:37.441419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:28.015 [2024-11-27 19:29:37.441426] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.441462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.441470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:28.015 [2024-11-27 19:29:37.441479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:28.015 [2024-11-27 19:29:37.441486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.455323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.455352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:28.015 [2024-11-27 19:29:37.455362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.812 ms 00:31:28.015 [2024-11-27 19:29:37.455373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.455474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.455485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:28.015 [2024-11-27 19:29:37.455493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:28.015 [2024-11-27 19:29:37.455500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.485732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.485859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:28.015 [2024-11-27 19:29:37.485876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.213 ms 00:31:28.015 [2024-11-27 19:29:37.485885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.495290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.495325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:28.015 [2024-11-27 19:29:37.495335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:31:28.015 [2024-11-27 19:29:37.495342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.550036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.550097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:28.015 [2024-11-27 19:29:37.550110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.642 ms 00:31:28.015 [2024-11-27 19:29:37.550119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.550275] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:28.015 [2024-11-27 19:29:37.550369] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:28.015 [2024-11-27 19:29:37.550458] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:28.015 [2024-11-27 19:29:37.550547] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:28.015 [2024-11-27 19:29:37.550557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.550565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:28.015 [2024-11-27 
19:29:37.550573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:31:28.015 [2024-11-27 19:29:37.550581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.550629] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:28.015 [2024-11-27 19:29:37.550641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.550651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:28.015 [2024-11-27 19:29:37.550660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:28.015 [2024-11-27 19:29:37.550667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.565771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.565806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:28.015 [2024-11-27 19:29:37.565817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.084 ms 00:31:28.015 [2024-11-27 19:29:37.565824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.574297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.574326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:28.015 [2024-11-27 19:29:37.574336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:28.015 [2024-11-27 19:29:37.574343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.015 [2024-11-27 19:29:37.574422] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:28.015 [2024-11-27 19:29:37.574552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.015 [2024-11-27 19:29:37.574562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:28.015 [2024-11-27 19:29:37.574571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:31:28.015 [2024-11-27 19:29:37.574578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.587 [2024-11-27 19:29:38.190956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.587 [2024-11-27 19:29:38.191009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:28.587 [2024-11-27 19:29:38.191023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.555 ms 00:31:28.587 [2024-11-27 19:29:38.191032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.587 [2024-11-27 19:29:38.195167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.587 [2024-11-27 19:29:38.195199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:28.587 [2024-11-27 19:29:38.195209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:31:28.587 [2024-11-27 19:29:38.195222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.587 [2024-11-27 19:29:38.196145] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:28.587 [2024-11-27 19:29:38.196226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.587 [2024-11-27 19:29:38.196238] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:28.587 [2024-11-27 19:29:38.196248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.976 ms 00:31:28.587 [2024-11-27 19:29:38.196256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.587 [2024-11-27 19:29:38.196289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.587 [2024-11-27 19:29:38.196300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:28.587 [2024-11-27 19:29:38.196309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.587 [2024-11-27 19:29:38.196322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.587 [2024-11-27 19:29:38.196357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 621.936 ms, result 0 00:31:28.587 [2024-11-27 19:29:38.196395] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:28.587 [2024-11-27 19:29:38.196470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.587 [2024-11-27 19:29:38.196480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:28.587 [2024-11-27 19:29:38.196488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:31:28.587 [2024-11-27 19:29:38.196495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.776577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.776625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:29.165 [2024-11-27 19:29:38.776646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 579.139 ms 00:31:29.165 [2024-11-27 19:29:38.776652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.780180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.780208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:29.165 [2024-11-27 19:29:38.780216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:31:29.165 [2024-11-27 19:29:38.780222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.780526] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:29.165 [2024-11-27 19:29:38.780546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.780553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:29.165 [2024-11-27 19:29:38.780560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:31:29.165 [2024-11-27 19:29:38.780566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.780587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.780594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:29.165 [2024-11-27 19:29:38.780600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:29.165 [2024-11-27 19:29:38.780606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 
19:29:38.780643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 584.246 ms, result 0 00:31:29.165 [2024-11-27 19:29:38.780675] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:29.165 [2024-11-27 19:29:38.780683] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:29.165 [2024-11-27 19:29:38.780690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.780697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:29.165 [2024-11-27 19:29:38.780703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1206.287 ms 00:31:29.165 [2024-11-27 19:29:38.780709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.780732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.780741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:29.165 [2024-11-27 19:29:38.780747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:29.165 [2024-11-27 19:29:38.780753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.789333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:29.165 [2024-11-27 19:29:38.789412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.789421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:29.165 [2024-11-27 19:29:38.789427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.647 ms 00:31:29.165 [2024-11-27 19:29:38.789433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.165 [2024-11-27 19:29:38.789948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.165 [2024-11-27 19:29:38.789969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:29.165 [2024-11-27 19:29:38.789976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.467 ms 00:31:29.165 [2024-11-27 19:29:38.789983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.791699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.791800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:29.427 [2024-11-27 19:29:38.791812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.704 ms 00:31:29.427 [2024-11-27 19:29:38.791818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.791850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.791857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:29.427 [2024-11-27 19:29:38.791867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:29.427 [2024-11-27 19:29:38.791873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.791951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.791958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:29.427 
[2024-11-27 19:29:38.791964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:29.427 [2024-11-27 19:29:38.791970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.791987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.791994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:29.427 [2024-11-27 19:29:38.791999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:29.427 [2024-11-27 19:29:38.792005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.792027] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:29.427 [2024-11-27 19:29:38.792034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.792040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:29.427 [2024-11-27 19:29:38.792046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:29.427 [2024-11-27 19:29:38.792052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.792089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.427 [2024-11-27 19:29:38.792096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:29.427 [2024-11-27 19:29:38.792102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:29.427 [2024-11-27 19:29:38.792107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.427 [2024-11-27 19:29:38.792783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1435.698 ms, result 0 00:31:29.427 [2024-11-27 19:29:38.857705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.427 [2024-11-27 19:29:38.873697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:29.427 [2024-11-27 19:29:38.881808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:29.427 Validate MD5 checksum, iteration 1 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:29.427 19:29:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:29.427 19:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:29.427 [2024-11-27 19:29:38.974832] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:29.427 [2024-11-27 19:29:38.975155] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83766 ] 00:31:29.689 [2024-11-27 19:29:39.136943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.689 [2024-11-27 19:29:39.231391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.602  [2024-11-27T19:29:41.499Z] Copying: 663/1024 [MB] (663 MBps) [2024-11-27T19:29:44.141Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:31:34.506 00:31:34.506 19:29:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:34.506 19:29:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=94b2fedd43cdeec18c04632d846a976b 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 94b2fedd43cdeec18c04632d846a976b != \9\4\b\2\f\e\d\d\4\3\c\d\e\e\c\1\8\c\0\4\6\3\2\d\8\4\6\a\9\7\6\b ]] 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:36.421 Validate MD5 checksum, iteration 2 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:36.421 19:29:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:36.421 [2024-11-27 19:29:45.871210] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 24.03.0 initialization... 00:31:36.421 [2024-11-27 19:29:45.871441] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83844 ] 00:31:36.421 [2024-11-27 19:29:46.025487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.683 [2024-11-27 19:29:46.129190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.066  [2024-11-27T19:29:48.270Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-27T19:29:50.178Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:31:40.543 00:31:40.543 19:29:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:40.543 19:29:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=15d35a652fc95fc3aeb6d25cefc32c9a 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 15d35a652fc95fc3aeb6d25cefc32c9a != \1\5\d\3\5\a\6\5\2\f\c\9\5\f\c\3\a\e\b\6\d\2\5\c\e\f\c\3\2\c\9\a ]] 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:42.455 19:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83732 ]] 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83732 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83732 ']' 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83732 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83732 00:31:42.455 killing process with pid 83732 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83732' 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83732 00:31:42.455 19:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83732 00:31:43.025 [2024-11-27 19:29:52.592066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:43.025 [2024-11-27 19:29:52.604403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.025 [2024-11-27 19:29:52.604438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:43.025 [2024-11-27 19:29:52.604449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:43.025 [2024-11-27 19:29:52.604456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.604472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:43.026 [2024-11-27 19:29:52.606484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.606508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:43.026 [2024-11-27 19:29:52.606520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.002 ms 00:31:43.026 [2024-11-27 19:29:52.606527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.606690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.606697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:43.026 [2024-11-27 19:29:52.606703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:31:43.026 [2024-11-27 19:29:52.606710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.607872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.607985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:43.026 [2024-11-27 19:29:52.607998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:31:43.026 [2024-11-27 19:29:52.608008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.608877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.608893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:43.026 [2024-11-27 19:29:52.608901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:31:43.026 [2024-11-27 19:29:52.608908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.616223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.616248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:43.026 [2024-11-27 19:29:52.616260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.281 ms 00:31:43.026 [2024-11-27 19:29:52.616266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.620338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.620364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:43.026 [2024-11-27 19:29:52.620372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.046 ms 00:31:43.026 [2024-11-27 19:29:52.620379] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.620436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.620443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:43.026 [2024-11-27 19:29:52.620450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:43.026 [2024-11-27 19:29:52.620459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.627658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.627682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:43.026 [2024-11-27 19:29:52.627689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.186 ms 00:31:43.026 [2024-11-27 19:29:52.627694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.635057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.635247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:43.026 [2024-11-27 19:29:52.635259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.338 ms 00:31:43.026 [2024-11-27 19:29:52.635264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.642420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.642511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:43.026 [2024-11-27 19:29:52.642572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.100 ms 00:31:43.026 [2024-11-27 19:29:52.642591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.649569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.649656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:43.026 [2024-11-27 19:29:52.649701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.928 ms 00:31:43.026 [2024-11-27 19:29:52.649718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.649747] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:43.026 [2024-11-27 19:29:52.649768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.026 [2024-11-27 19:29:52.649792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.026 [2024-11-27 19:29:52.649814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:43.026 [2024-11-27 19:29:52.649836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.649886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 
[2024-11-27 19:29:52.650116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-27 19:29:52.650509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:43.026 [2024-11-27 19:29:52.650525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d3964679-f9fd-4b43-a6c9-6913abbbe4be 00:31:43.026 [2024-11-27 19:29:52.650548] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:43.026 [2024-11-27 19:29:52.650562] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:43.026 [2024-11-27 19:29:52.650597] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:43.026 [2024-11-27 19:29:52.650614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:43.026 [2024-11-27 19:29:52.650629] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:43.026 [2024-11-27 19:29:52.650643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:43.026 [2024-11-27 19:29:52.650662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:43.026 [2024-11-27 19:29:52.650675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:43.026 [2024-11-27 19:29:52.650707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:43.026 [2024-11-27 19:29:52.650724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.650824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:43.026 [2024-11-27 19:29:52.650842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:31:43.026 [2024-11-27 19:29:52.650857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.026 [2024-11-27 19:29:52.660242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.026 [2024-11-27 19:29:52.660330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:43.288 [2024-11-27 19:29:52.660387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.353 ms 00:31:43.288 [2024-11-27 19:29:52.660404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
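The shutdown trace above records every FTL management step as an Action / name / duration / status quadruple emitted by mngt/ftl_mngt.c trace_step (the 428: line carries the step name, the 430: line its duration). A minimal sketch for summarizing those timings from a saved copy of this output; the file name ftl.log is hypothetical, and the pairing assumes name and duration lines strictly alternate, as the 428/430 pairs above do:

#!/usr/bin/env bash
# Pair each trace_step "name:" line with the "duration:" line that follows it,
# then list the slowest FTL management steps first.
log=${1:-ftl.log}                      # hypothetical saved copy of the log
grep -oE 'name: [^*]+|duration: [0-9.]+ ms' "$log" |
  paste - - |                          # join each name with its duration
  sed -E 's/name: //; s/duration: //' |
  sort -t$'\t' -k2 -rn |               # numeric sort on the duration field
  head                                 # e.g. "Deinitialize L2P  9.353 ms" on top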
00:31:43.288 [2024-11-27 19:29:52.660684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.288 [2024-11-27 19:29:52.660709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:43.288 [2024-11-27 19:29:52.660776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:31:43.288 [2024-11-27 19:29:52.660795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.693276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.693365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:43.288 [2024-11-27 19:29:52.693404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.693425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.693457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.693473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:43.288 [2024-11-27 19:29:52.693487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.693501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.693571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.693591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:43.288 [2024-11-27 19:29:52.693606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.693657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.693686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.693701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:43.288 [2024-11-27 19:29:52.693716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.693730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.751844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.751950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:43.288 [2024-11-27 19:29:52.751990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.752006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.800619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.800729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:43.288 [2024-11-27 19:29:52.800766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.800783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.800844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.800862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:43.288 [2024-11-27 19:29:52.800878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.800893] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.800944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.800972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:43.288 [2024-11-27 19:29:52.801019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.801036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.801119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.801149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:43.288 [2024-11-27 19:29:52.801164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.801180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.801213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.801230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:43.288 [2024-11-27 19:29:52.801248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.801289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.801327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.801344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:43.288 [2024-11-27 19:29:52.801358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.801373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.801440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.288 [2024-11-27 19:29:52.801463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:43.288 [2024-11-27 19:29:52.801479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.288 [2024-11-27 19:29:52.801493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.288 [2024-11-27 19:29:52.801591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.165 ms, result 0 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:43.860 Remove shared memory files 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:43.860 19:29:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83495 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:43.860 ************************************ 00:31:43.860 END TEST ftl_upgrade_shutdown 00:31:43.860 ************************************ 00:31:43.860 00:31:43.860 real 1m25.227s 00:31:43.860 user 1m56.168s 00:31:43.860 sys 0m20.209s 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.860 19:29:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:44.120 19:29:53 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:44.120 19:29:53 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:44.120 19:29:53 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:44.120 19:29:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.120 19:29:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:44.120 ************************************ 00:31:44.120 START TEST ftl_restore_fast 00:31:44.120 ************************************ 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:44.120 * Looking for test storage... 00:31:44.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:44.120 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:44.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.121 --rc genhtml_branch_coverage=1 00:31:44.121 --rc genhtml_function_coverage=1 00:31:44.121 --rc genhtml_legend=1 00:31:44.121 --rc geninfo_all_blocks=1 00:31:44.121 --rc geninfo_unexecuted_blocks=1 00:31:44.121 00:31:44.121 ' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:44.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.121 --rc genhtml_branch_coverage=1 00:31:44.121 --rc genhtml_function_coverage=1 00:31:44.121 --rc genhtml_legend=1 00:31:44.121 --rc geninfo_all_blocks=1 00:31:44.121 --rc geninfo_unexecuted_blocks=1 00:31:44.121 00:31:44.121 ' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:44.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.121 --rc genhtml_branch_coverage=1 00:31:44.121 --rc genhtml_function_coverage=1 00:31:44.121 --rc genhtml_legend=1 00:31:44.121 --rc geninfo_all_blocks=1 00:31:44.121 --rc geninfo_unexecuted_blocks=1 00:31:44.121 00:31:44.121 ' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:44.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.121 --rc genhtml_branch_coverage=1 00:31:44.121 --rc genhtml_function_coverage=1 00:31:44.121 --rc genhtml_legend=1 00:31:44.121 --rc geninfo_all_blocks=1 00:31:44.121 --rc geninfo_unexecuted_blocks=1 00:31:44.121 00:31:44.121 ' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
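The xtrace above steps through scripts/common.sh's cmp_versions as it checks whether the installed lcov (1.15) predates version 2 before choosing LCOV_OPTS. A standalone sketch of the same dotted-version comparison; this is illustrative, not the SPDK helper itself, and it assumes purely numeric components, as in 1.15 vs 2:

#!/usr/bin/env bash
# Split two version strings on '.', '-' and ':' (the IFS the trace sets),
# then compare component by component, treating missing parts as 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: keep the old-style coverage flags"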
00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Rnu0xHZnAw 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:44.121 19:29:53 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84002 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84002 00:31:44.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84002 ']' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:44.121 19:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:44.381 [2024-11-27 19:29:53.761359] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:31:44.381 [2024-11-27 19:29:53.761467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84002 ] 00:31:44.381 [2024-11-27 19:29:53.918849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.381 [2024-11-27 19:29:54.014727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:45.326 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:45.587 19:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:45.587 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:45.587 { 00:31:45.587 "name": "nvme0n1", 00:31:45.587 "aliases": [ 00:31:45.587 "e5e68aa7-ef8a-43e1-8f81-a0af9b80ee7b" 00:31:45.587 ], 00:31:45.587 "product_name": "NVMe disk", 00:31:45.587 "block_size": 4096, 00:31:45.587 "num_blocks": 1310720, 00:31:45.587 "uuid": "e5e68aa7-ef8a-43e1-8f81-a0af9b80ee7b", 00:31:45.587 "numa_id": -1, 00:31:45.587 "assigned_rate_limits": { 00:31:45.587 "rw_ios_per_sec": 0, 00:31:45.587 "rw_mbytes_per_sec": 0, 00:31:45.587 "r_mbytes_per_sec": 0, 00:31:45.587 "w_mbytes_per_sec": 0 00:31:45.587 }, 00:31:45.587 "claimed": true, 00:31:45.587 "claim_type": "read_many_write_one", 00:31:45.587 "zoned": false, 00:31:45.588 "supported_io_types": { 00:31:45.588 "read": true, 00:31:45.588 "write": true, 00:31:45.588 "unmap": true, 00:31:45.588 "flush": true, 00:31:45.588 "reset": true, 00:31:45.588 "nvme_admin": true, 00:31:45.588 "nvme_io": true, 00:31:45.588 "nvme_io_md": false, 00:31:45.588 "write_zeroes": true, 00:31:45.588 "zcopy": false, 00:31:45.588 "get_zone_info": false, 00:31:45.588 "zone_management": false, 00:31:45.588 "zone_append": false, 00:31:45.588 "compare": true, 00:31:45.588 "compare_and_write": false, 00:31:45.588 "abort": true, 00:31:45.588 "seek_hole": false, 00:31:45.588 "seek_data": false, 00:31:45.588 "copy": true, 00:31:45.588 "nvme_iov_md": false 00:31:45.588 }, 00:31:45.588 "driver_specific": { 00:31:45.588 "nvme": [ 00:31:45.588 { 00:31:45.588 "pci_address": "0000:00:11.0", 00:31:45.588 "trid": { 00:31:45.588 "trtype": "PCIe", 00:31:45.588 "traddr": "0000:00:11.0" 00:31:45.588 }, 00:31:45.588 "ctrlr_data": { 00:31:45.588 "cntlid": 0, 00:31:45.588 "vendor_id": "0x1b36", 00:31:45.588 "model_number": "QEMU NVMe Ctrl", 00:31:45.588 "serial_number": "12341", 00:31:45.588 "firmware_revision": "8.0.0", 00:31:45.588 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:45.588 "oacs": { 00:31:45.588 "security": 0, 00:31:45.588 "format": 1, 00:31:45.588 "firmware": 0, 00:31:45.588 "ns_manage": 1 00:31:45.588 }, 00:31:45.588 "multi_ctrlr": false, 00:31:45.588 "ana_reporting": false 00:31:45.588 }, 00:31:45.588 "vs": { 00:31:45.588 "nvme_version": "1.4" 00:31:45.588 }, 00:31:45.588 "ns_data": { 00:31:45.588 "id": 1, 00:31:45.588 "can_share": false 00:31:45.588 } 00:31:45.588 } 00:31:45.588 ], 00:31:45.588 "mp_policy": "active_passive" 00:31:45.588 } 00:31:45.588 } 00:31:45.588 ]' 00:31:45.588 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:45.588 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:45.588 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=7dd8a1b5-96d2-469c-b831-d1898f82e58f 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:45.849 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7dd8a1b5-96d2-469c-b831-d1898f82e58f 00:31:46.111 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:46.372 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=b1023053-c6a6-4957-b674-0c3131445d0c 00:31:46.372 19:29:55 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b1023053-c6a6-4957-b674-0c3131445d0c 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:46.634 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:46.895 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:46.895 { 00:31:46.895 "name": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:46.895 "aliases": [ 00:31:46.895 "lvs/nvme0n1p0" 00:31:46.895 ], 00:31:46.895 "product_name": "Logical Volume", 00:31:46.895 "block_size": 4096, 00:31:46.895 "num_blocks": 26476544, 00:31:46.895 "uuid": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:46.895 "assigned_rate_limits": { 00:31:46.895 "rw_ios_per_sec": 0, 00:31:46.895 "rw_mbytes_per_sec": 0, 00:31:46.895 "r_mbytes_per_sec": 0, 00:31:46.895 "w_mbytes_per_sec": 0 00:31:46.895 }, 00:31:46.895 "claimed": false, 00:31:46.895 "zoned": false, 00:31:46.895 "supported_io_types": { 00:31:46.895 "read": true, 00:31:46.895 "write": true, 00:31:46.895 "unmap": true, 00:31:46.895 "flush": false, 00:31:46.895 "reset": true, 00:31:46.895 "nvme_admin": false, 00:31:46.895 "nvme_io": false, 00:31:46.895 "nvme_io_md": false, 00:31:46.895 "write_zeroes": true, 00:31:46.895 "zcopy": false, 00:31:46.895 "get_zone_info": false, 00:31:46.895 "zone_management": false, 00:31:46.895 
"zone_append": false, 00:31:46.895 "compare": false, 00:31:46.895 "compare_and_write": false, 00:31:46.895 "abort": false, 00:31:46.895 "seek_hole": true, 00:31:46.895 "seek_data": true, 00:31:46.895 "copy": false, 00:31:46.896 "nvme_iov_md": false 00:31:46.896 }, 00:31:46.896 "driver_specific": { 00:31:46.896 "lvol": { 00:31:46.896 "lvol_store_uuid": "b1023053-c6a6-4957-b674-0c3131445d0c", 00:31:46.896 "base_bdev": "nvme0n1", 00:31:46.896 "thin_provision": true, 00:31:46.896 "num_allocated_clusters": 0, 00:31:46.896 "snapshot": false, 00:31:46.896 "clone": false, 00:31:46.896 "esnap_clone": false 00:31:46.896 } 00:31:46.896 } 00:31:46.896 } 00:31:46.896 ]' 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:46.896 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:47.167 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:47.428 { 00:31:47.428 "name": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:47.428 "aliases": [ 00:31:47.428 "lvs/nvme0n1p0" 00:31:47.428 ], 00:31:47.428 "product_name": "Logical Volume", 00:31:47.428 "block_size": 4096, 00:31:47.428 "num_blocks": 26476544, 00:31:47.428 "uuid": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:47.428 "assigned_rate_limits": { 00:31:47.428 "rw_ios_per_sec": 0, 00:31:47.428 "rw_mbytes_per_sec": 0, 00:31:47.428 "r_mbytes_per_sec": 0, 00:31:47.428 "w_mbytes_per_sec": 0 00:31:47.428 }, 00:31:47.428 "claimed": false, 00:31:47.428 "zoned": false, 00:31:47.428 "supported_io_types": { 00:31:47.428 "read": true, 00:31:47.428 "write": true, 00:31:47.428 "unmap": true, 00:31:47.428 "flush": false, 00:31:47.428 "reset": true, 00:31:47.428 "nvme_admin": false, 00:31:47.428 "nvme_io": false, 00:31:47.428 "nvme_io_md": false, 00:31:47.428 "write_zeroes": true, 00:31:47.428 "zcopy": false, 00:31:47.428 "get_zone_info": false, 00:31:47.428 
"zone_management": false, 00:31:47.428 "zone_append": false, 00:31:47.428 "compare": false, 00:31:47.428 "compare_and_write": false, 00:31:47.428 "abort": false, 00:31:47.428 "seek_hole": true, 00:31:47.428 "seek_data": true, 00:31:47.428 "copy": false, 00:31:47.428 "nvme_iov_md": false 00:31:47.428 }, 00:31:47.428 "driver_specific": { 00:31:47.428 "lvol": { 00:31:47.428 "lvol_store_uuid": "b1023053-c6a6-4957-b674-0c3131445d0c", 00:31:47.428 "base_bdev": "nvme0n1", 00:31:47.428 "thin_provision": true, 00:31:47.428 "num_allocated_clusters": 0, 00:31:47.428 "snapshot": false, 00:31:47.428 "clone": false, 00:31:47.428 "esnap_clone": false 00:31:47.428 } 00:31:47.428 } 00:31:47.428 } 00:31:47.428 ]' 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:47.428 19:29:56 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fbb02798-e5a3-4ae7-ae25-91b31221064f 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:47.690 { 00:31:47.690 "name": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:47.690 "aliases": [ 00:31:47.690 "lvs/nvme0n1p0" 00:31:47.690 ], 00:31:47.690 "product_name": "Logical Volume", 00:31:47.690 "block_size": 4096, 00:31:47.690 "num_blocks": 26476544, 00:31:47.690 "uuid": "fbb02798-e5a3-4ae7-ae25-91b31221064f", 00:31:47.690 "assigned_rate_limits": { 00:31:47.690 "rw_ios_per_sec": 0, 00:31:47.690 "rw_mbytes_per_sec": 0, 00:31:47.690 "r_mbytes_per_sec": 0, 00:31:47.690 "w_mbytes_per_sec": 0 00:31:47.690 }, 00:31:47.690 "claimed": false, 00:31:47.690 "zoned": false, 00:31:47.690 "supported_io_types": { 00:31:47.690 "read": true, 00:31:47.690 "write": true, 00:31:47.690 "unmap": true, 00:31:47.690 "flush": false, 00:31:47.690 "reset": true, 00:31:47.690 "nvme_admin": false, 00:31:47.690 "nvme_io": false, 00:31:47.690 "nvme_io_md": false, 00:31:47.690 "write_zeroes": true, 00:31:47.690 "zcopy": false, 00:31:47.690 "get_zone_info": false, 00:31:47.690 "zone_management": false, 00:31:47.690 "zone_append": false, 00:31:47.690 "compare": false, 00:31:47.690 "compare_and_write": false, 00:31:47.690 "abort": false, 
00:31:47.690 "seek_hole": true, 00:31:47.690 "seek_data": true, 00:31:47.690 "copy": false, 00:31:47.690 "nvme_iov_md": false 00:31:47.690 }, 00:31:47.690 "driver_specific": { 00:31:47.690 "lvol": { 00:31:47.690 "lvol_store_uuid": "b1023053-c6a6-4957-b674-0c3131445d0c", 00:31:47.690 "base_bdev": "nvme0n1", 00:31:47.690 "thin_provision": true, 00:31:47.690 "num_allocated_clusters": 0, 00:31:47.690 "snapshot": false, 00:31:47.690 "clone": false, 00:31:47.690 "esnap_clone": false 00:31:47.690 } 00:31:47.690 } 00:31:47.690 } 00:31:47.690 ]' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fbb02798-e5a3-4ae7-ae25-91b31221064f --l2p_dram_limit 10' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:47.690 19:29:57 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fbb02798-e5a3-4ae7-ae25-91b31221064f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:47.952 [2024-11-27 19:29:57.492950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.492990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:47.952 [2024-11-27 19:29:57.493003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:47.952 [2024-11-27 19:29:57.493009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.493061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.493070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:47.952 [2024-11-27 19:29:57.493077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:47.952 [2024-11-27 19:29:57.493083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.493099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:47.952 [2024-11-27 19:29:57.493664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:47.952 [2024-11-27 19:29:57.493685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.493691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:47.952 [2024-11-27 19:29:57.493699] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:31:47.952 [2024-11-27 19:29:57.493704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.493758] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f 00:31:47.952 [2024-11-27 19:29:57.494688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.494713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:47.952 [2024-11-27 19:29:57.494720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:47.952 [2024-11-27 19:29:57.494727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.499372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.499404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:47.952 [2024-11-27 19:29:57.499412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:31:47.952 [2024-11-27 19:29:57.499420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.499485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.499494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:47.952 [2024-11-27 19:29:57.499501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:47.952 [2024-11-27 19:29:57.499510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.499539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.499548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:47.952 [2024-11-27 19:29:57.499556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:47.952 [2024-11-27 19:29:57.499563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.499579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:47.952 [2024-11-27 19:29:57.502400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.502425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:47.952 [2024-11-27 19:29:57.502433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:31:47.952 [2024-11-27 19:29:57.502440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.502466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.502473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:47.952 [2024-11-27 19:29:57.502480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:47.952 [2024-11-27 19:29:57.502486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.502506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:47.952 [2024-11-27 19:29:57.502611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:47.952 [2024-11-27 19:29:57.502623] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:47.952 [2024-11-27 19:29:57.502631] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:47.952 [2024-11-27 19:29:57.502640] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:47.952 [2024-11-27 19:29:57.502647] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:47.952 [2024-11-27 19:29:57.502655] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:47.952 [2024-11-27 19:29:57.502662] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:47.952 [2024-11-27 19:29:57.502669] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:47.952 [2024-11-27 19:29:57.502675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:47.952 [2024-11-27 19:29:57.502682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.502692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:47.952 [2024-11-27 19:29:57.502700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:31:47.952 [2024-11-27 19:29:57.502705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.502770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.952 [2024-11-27 19:29:57.502776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:47.952 [2024-11-27 19:29:57.502783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:47.952 [2024-11-27 19:29:57.502788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.952 [2024-11-27 19:29:57.502867] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:47.952 [2024-11-27 19:29:57.502875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:47.952 [2024-11-27 19:29:57.502882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:47.952 [2024-11-27 19:29:57.502888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.952 [2024-11-27 19:29:57.502896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:47.952 [2024-11-27 19:29:57.502901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:47.952 [2024-11-27 19:29:57.502907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:47.952 [2024-11-27 19:29:57.502912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:47.953 [2024-11-27 19:29:57.502919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:47.953 [2024-11-27 19:29:57.502924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:47.953 [2024-11-27 19:29:57.502931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:47.953 [2024-11-27 19:29:57.502936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:47.953 [2024-11-27 19:29:57.502942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:47.953 [2024-11-27 19:29:57.502947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:47.953 [2024-11-27 19:29:57.502953] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:47.953 [2024-11-27 19:29:57.502959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.953 [2024-11-27 19:29:57.502968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:47.953 [2024-11-27 19:29:57.502973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:47.953 [2024-11-27 19:29:57.502979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.953 [2024-11-27 19:29:57.502985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:47.953 [2024-11-27 19:29:57.502991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:47.953 [2024-11-27 19:29:57.502996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:47.953 [2024-11-27 19:29:57.503002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:47.953 [2024-11-27 19:29:57.503007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:47.953 [2024-11-27 19:29:57.503018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:47.953 [2024-11-27 19:29:57.503024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:47.953 [2024-11-27 19:29:57.503035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:47.953 [2024-11-27 19:29:57.503040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:47.953 [2024-11-27 19:29:57.503052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:47.953 [2024-11-27 19:29:57.503059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:47.953 [2024-11-27 19:29:57.503070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:47.953 [2024-11-27 19:29:57.503074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:47.953 [2024-11-27 19:29:57.503080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:47.953 [2024-11-27 19:29:57.503086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:47.953 [2024-11-27 19:29:57.503092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:47.953 [2024-11-27 19:29:57.503097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:47.953 [2024-11-27 19:29:57.503117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:47.953 [2024-11-27 19:29:57.503139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:47.953 [2024-11-27 19:29:57.503153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:47.953 [2024-11-27 19:29:57.503159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:47.953 [2024-11-27 19:29:57.503165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:47.953 [2024-11-27 19:29:57.503172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:47.953 [2024-11-27 19:29:57.503180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:47.953 [2024-11-27 19:29:57.503185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:47.953 [2024-11-27 19:29:57.503192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:47.953 [2024-11-27 19:29:57.503197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:47.953 [2024-11-27 19:29:57.503203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:47.953 [2024-11-27 19:29:57.503211] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:47.953 [2024-11-27 19:29:57.503221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:47.953 [2024-11-27 19:29:57.503234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:47.953 [2024-11-27 19:29:57.503240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:47.953 [2024-11-27 19:29:57.503247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:47.953 [2024-11-27 19:29:57.503252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:47.953 [2024-11-27 19:29:57.503258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:47.953 [2024-11-27 19:29:57.503264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:47.953 [2024-11-27 19:29:57.503270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:47.953 [2024-11-27 19:29:57.503275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:47.953 [2024-11-27 19:29:57.503284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
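Each "Region type:… ver:… blk_offs:… blk_sz:…" line in the superblock dump above expresses offsets and sizes in 4096-byte blocks (the bdev JSON earlier in this run reports "block_size": 4096), which is where the MiB figures in the layout dump come from. A quick sketch of that conversion; layout.log is a hypothetical saved copy of the dump, and strtonum is gawk-specific:

#!/usr/bin/env bash
# Convert the hex block offsets/sizes from the SB layout dump into MiB,
# using the 4096-byte block size the bdevs report.
log=${1:-layout.log}                   # hypothetical saved copy of the dump
grep -oE 'type:0x[0-9a-fA-F]+ ver:[0-9]+ blk_offs:0x[0-9a-fA-F]+ blk_sz:0x[0-9a-fA-F]+' "$log" |
awk '{ sub(/^type:/, "", $1); sub(/^blk_offs:/, "", $3); sub(/^blk_sz:/, "", $4); printf "region %-12s offset %10.2f MiB  size %10.2f MiB\n", $1, strtonum($3) * 4096 / 1048576, strtonum($4) * 4096 / 1048576 }'

For example, blk_offs:0x5020 is 20512 blocks, i.e. the 80.12 MiB offset shown for the band_md region above.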
00:31:47.953 [2024-11-27 19:29:57.503321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:47.953 [2024-11-27 19:29:57.503328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:47.953 [2024-11-27 19:29:57.503341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:47.953 [2024-11-27 19:29:57.503347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:47.953 [2024-11-27 19:29:57.503353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:47.953 [2024-11-27 19:29:57.503359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:47.953 [2024-11-27 19:29:57.503365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:47.953 [2024-11-27 19:29:57.503371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:31:47.953 [2024-11-27 19:29:57.503378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:47.953 [2024-11-27 19:29:57.503409] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:47.953 [2024-11-27 19:29:57.503419] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:51.256 [2024-11-27 19:30:00.486143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.486221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:51.256 [2024-11-27 19:30:00.486237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2982.721 ms 00:31:51.256 [2024-11-27 19:30:00.486249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.515482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.515544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:51.256 [2024-11-27 19:30:00.515558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.007 ms 00:31:51.256 [2024-11-27 19:30:00.515569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.515705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.515719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:51.256 [2024-11-27 19:30:00.515729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:51.256 [2024-11-27 19:30:00.515745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.550036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.550092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:51.256 [2024-11-27 19:30:00.550105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.255 ms 00:31:51.256 [2024-11-27 19:30:00.550117] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.550177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.550189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:51.256 [2024-11-27 19:30:00.550199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:51.256 [2024-11-27 19:30:00.550217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.550829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.550859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:51.256 [2024-11-27 19:30:00.550869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:31:51.256 [2024-11-27 19:30:00.550880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.550998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.551012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:51.256 [2024-11-27 19:30:00.551021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:31:51.256 [2024-11-27 19:30:00.551034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.568456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.568509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:51.256 [2024-11-27 19:30:00.568520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.404 ms 00:31:51.256 [2024-11-27 19:30:00.568531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.596179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:51.256 [2024-11-27 19:30:00.600220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.600267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:51.256 [2024-11-27 19:30:00.600282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.598 ms 00:31:51.256 [2024-11-27 19:30:00.600291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.699876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.699941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:51.256 [2024-11-27 19:30:00.699958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.533 ms 00:31:51.256 [2024-11-27 19:30:00.699968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.700194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.700207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:51.256 [2024-11-27 19:30:00.700223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:51.256 [2024-11-27 19:30:00.700232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.726660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.256 [2024-11-27 19:30:00.726716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:51.256 [2024-11-27 19:30:00.726732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.367 ms 00:31:51.256 [2024-11-27 19:30:00.726741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.256 [2024-11-27 19:30:00.751709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.257 [2024-11-27 19:30:00.751759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:51.257 [2024-11-27 19:30:00.751776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.907 ms 00:31:51.257 [2024-11-27 19:30:00.751784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.257 [2024-11-27 19:30:00.752449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.257 [2024-11-27 19:30:00.752462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:51.257 [2024-11-27 19:30:00.752477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:31:51.257 [2024-11-27 19:30:00.752485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.257 [2024-11-27 19:30:00.841019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.257 [2024-11-27 19:30:00.841075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:51.257 [2024-11-27 19:30:00.841095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.463 ms 00:31:51.257 [2024-11-27 19:30:00.841104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.257 [2024-11-27 19:30:00.868735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.257 [2024-11-27 19:30:00.868787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:51.257 [2024-11-27 19:30:00.868802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.532 ms 00:31:51.257 [2024-11-27 19:30:00.868812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.518 [2024-11-27 19:30:00.894671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.518 [2024-11-27 19:30:00.894722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:51.518 [2024-11-27 19:30:00.894737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.803 ms 00:31:51.518 [2024-11-27 19:30:00.894744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.518 [2024-11-27 19:30:00.921285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.518 [2024-11-27 19:30:00.921342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:51.518 [2024-11-27 19:30:00.921358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.484 ms 00:31:51.518 [2024-11-27 19:30:00.921366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.518 [2024-11-27 19:30:00.921423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.518 [2024-11-27 19:30:00.921433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:51.518 [2024-11-27 19:30:00.921449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:51.518 [2024-11-27 19:30:00.921457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.518 [2024-11-27 19:30:00.921553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.518 [2024-11-27 
19:30:00.921567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:51.518 [2024-11-27 19:30:00.921578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:51.518 [2024-11-27 19:30:00.921586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.518 [2024-11-27 19:30:00.922976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3429.526 ms, result 0 00:31:51.518 { 00:31:51.518 "name": "ftl0", 00:31:51.518 "uuid": "4c01b3f1-2509-48ab-8fa9-84eb47f7df7f" 00:31:51.518 } 00:31:51.518 19:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:51.518 19:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:51.779 19:30:01 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:51.779 19:30:01 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:51.779 [2024-11-27 19:30:01.370184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.370266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:51.779 [2024-11-27 19:30:01.370283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:51.779 [2024-11-27 19:30:01.370294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.779 [2024-11-27 19:30:01.370321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:51.779 [2024-11-27 19:30:01.373404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.373452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:51.779 [2024-11-27 19:30:01.373467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:31:51.779 [2024-11-27 19:30:01.373475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.779 [2024-11-27 19:30:01.373780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.373792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:51.779 [2024-11-27 19:30:01.373805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:31:51.779 [2024-11-27 19:30:01.373814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.779 [2024-11-27 19:30:01.377066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.377094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:51.779 [2024-11-27 19:30:01.377107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:31:51.779 [2024-11-27 19:30:01.377115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.779 [2024-11-27 19:30:01.383264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.383309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:51.779 [2024-11-27 19:30:01.383323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:31:51.779 [2024-11-27 19:30:01.383332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.779 [2024-11-27 19:30:01.410257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:51.779 [2024-11-27 19:30:01.410312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:51.779 [2024-11-27 19:30:01.410329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.852 ms 00:31:51.779 [2024-11-27 19:30:01.410337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.428331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.428384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:52.043 [2024-11-27 19:30:01.428400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.932 ms 00:31:52.043 [2024-11-27 19:30:01.428408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.428584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.428596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:52.043 [2024-11-27 19:30:01.428609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:31:52.043 [2024-11-27 19:30:01.428617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.454778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.454828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:52.043 [2024-11-27 19:30:01.454844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:31:52.043 [2024-11-27 19:30:01.454852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.480241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.480292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:52.043 [2024-11-27 19:30:01.480306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:31:52.043 [2024-11-27 19:30:01.480313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.505242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.505291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:52.043 [2024-11-27 19:30:01.505305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.870 ms 00:31:52.043 [2024-11-27 19:30:01.505312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.529933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.043 [2024-11-27 19:30:01.529981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:52.043 [2024-11-27 19:30:01.529995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.518 ms 00:31:52.043 [2024-11-27 19:30:01.530002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.043 [2024-11-27 19:30:01.530053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:52.043 [2024-11-27 19:30:01.530068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:52.043 [2024-11-27 19:30:01.530085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:52.043 [2024-11-27 19:30:01.530093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (identical for every band) 00:31:52.044 [2024-11-27 19:30:01.531004] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:52.044 [2024-11-27 19:30:01.531014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f
00:31:52.044 [2024-11-27 19:30:01.531023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:52.044 [2024-11-27 19:30:01.531036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:52.044 [2024-11-27 19:30:01.531047] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:52.044 [2024-11-27 19:30:01.531056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:52.044 [2024-11-27 19:30:01.531063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:52.044 [2024-11-27 19:30:01.531073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:52.044 [2024-11-27 19:30:01.531080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:52.044 [2024-11-27 19:30:01.531089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:52.044 [2024-11-27 19:30:01.531095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:52.044 [2024-11-27 19:30:01.531105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.044 [2024-11-27 19:30:01.531201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:52.044 [2024-11-27 19:30:01.531213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:31:52.044 [2024-11-27 19:30:01.531224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.544900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.045 [2024-11-27 19:30:01.544949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:52.045 [2024-11-27 19:30:01.544964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.631 ms 00:31:52.045 [2024-11-27 19:30:01.544972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.545412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.045 [2024-11-27 19:30:01.545429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:52.045 [2024-11-27 19:30:01.545445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:31:52.045 [2024-11-27 19:30:01.545452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.592064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.045 [2024-11-27 19:30:01.592120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:52.045 [2024-11-27 19:30:01.592145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.045 [2024-11-27 19:30:01.592154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.592226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.045 [2024-11-27 19:30:01.592236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:52.045 [2024-11-27 19:30:01.592249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.045 [2024-11-27 19:30:01.592258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.592357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.045 [2024-11-27 19:30:01.592369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:52.045 [2024-11-27 19:30:01.592379] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.045 [2024-11-27 19:30:01.592387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.045 [2024-11-27 19:30:01.592411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.045 [2024-11-27 19:30:01.592419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:52.045 [2024-11-27 19:30:01.592429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.045 [2024-11-27 19:30:01.592439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.306 [2024-11-27 19:30:01.677016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.306 [2024-11-27 19:30:01.677086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:52.306 [2024-11-27 19:30:01.677101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.306 [2024-11-27 19:30:01.677110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.306 [2024-11-27 19:30:01.746061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.306 [2024-11-27 19:30:01.746121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:52.306 [2024-11-27 19:30:01.746156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.306 [2024-11-27 19:30:01.746165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.306 [2024-11-27 19:30:01.746255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.306 [2024-11-27 19:30:01.746266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:52.306 [2024-11-27 19:30:01.746278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.306 [2024-11-27 19:30:01.746286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.306 [2024-11-27 19:30:01.746362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.306 [2024-11-27 19:30:01.746372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:52.306 [2024-11-27 19:30:01.746383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.306 [2024-11-27 19:30:01.746391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.306 [2024-11-27 19:30:01.746494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.306 [2024-11-27 19:30:01.746504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:52.306 [2024-11-27 19:30:01.746515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.306 [2024-11-27 19:30:01.746523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.307 [2024-11-27 19:30:01.746559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.307 [2024-11-27 19:30:01.746569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:52.307 [2024-11-27 19:30:01.746579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.307 [2024-11-27 19:30:01.746587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.307 [2024-11-27 19:30:01.746634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.307 [2024-11-27 19:30:01.746643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:52.307 [2024-11-27 19:30:01.746653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.307 [2024-11-27 19:30:01.746662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.307 [2024-11-27 19:30:01.746714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.307 [2024-11-27 19:30:01.746725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:52.307 [2024-11-27 19:30:01.746736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.307 [2024-11-27 19:30:01.746744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.307 [2024-11-27 19:30:01.746897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.675 ms, result 0 00:31:52.307 true 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84002 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84002 ']' 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84002 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84002 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.307 killing process with pid 84002 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84002' 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84002 00:31:52.307 19:30:01 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84002 00:31:58.894 19:30:07 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:02.194 262144+0 records in 00:32:02.194 262144+0 records out 00:32:02.194 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.59233 s, 299 MB/s 00:32:02.194 19:30:11 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:04.110 19:30:13 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:04.110 [2024-11-27 19:30:13.318628] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
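The stretch of log above is the core of the restore test: restore.sh snapshots the bdev configuration (wrapping the save_subsystem_config output in a '{"subsystems": [...]}' envelope to produce ftl.json), cleanly unloads the FTL bdev, kills the SPDK app, fills a 1 GiB file from /dev/urandom (262144 x 4 KiB records in 3.59233 s is 1073741824 / 3.59233, i.e. the ~299 MB/s dd reports), records its md5sum, and replays the file onto ftl0 with spdk_dd, which boots its own SPDK instance from ftl.json. A minimal sketch of that flow, using only the commands visible in this log except for the final read-back-and-compare step, which is an assumption implied by the md5sum taken here:

  # Build the bdev config that spdk_dd consumes via --json (pattern from restore.sh@61-63 above).
  { echo '{"subsystems": ['; rpc.py save_subsystem_config -n bdev; echo ']}'; } > ftl.json
  rpc.py bdev_ftl_unload -b ftl0                     # clean FTL shutdown; the RPC prints 'true'
  dd if=/dev/urandom of=testfile bs=4K count=256K    # 1 GiB of random payload
  md5_before=$(md5sum testfile | cut -d' ' -f1)
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json    # write the payload through the FTL bdev
  # Hypothetical verification after the device is restored (not taken from this log):
  spdk_dd --ib=ftl0 --of=testfile.readback --json=ftl.json
  md5sum -c <<< "$md5_before  testfile.readback"     # restore passes if the checksums match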
00:32:04.110 [2024-11-27 19:30:13.318720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84237 ] 00:32:04.110 [2024-11-27 19:30:13.473804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.110 [2024-11-27 19:30:13.573175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.371 [2024-11-27 19:30:13.865388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.371 [2024-11-27 19:30:13.865474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.634 [2024-11-27 19:30:14.027006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.027074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:04.634 [2024-11-27 19:30:14.027090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:04.634 [2024-11-27 19:30:14.027099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.027183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.027198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.634 [2024-11-27 19:30:14.027207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:04.634 [2024-11-27 19:30:14.027215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.027237] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:04.634 [2024-11-27 19:30:14.027989] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:04.634 [2024-11-27 19:30:14.028008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.028016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.634 [2024-11-27 19:30:14.028026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:32:04.634 [2024-11-27 19:30:14.028033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.029845] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:04.634 [2024-11-27 19:30:14.044405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.044450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:04.634 [2024-11-27 19:30:14.044464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.563 ms 00:32:04.634 [2024-11-27 19:30:14.044473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.044549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.044560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:04.634 [2024-11-27 19:30:14.044569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:04.634 [2024-11-27 19:30:14.044577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.052429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:04.634 [2024-11-27 19:30:14.052473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.634 [2024-11-27 19:30:14.052483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.777 ms 00:32:04.634 [2024-11-27 19:30:14.052497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.052575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.052585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.634 [2024-11-27 19:30:14.052593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:04.634 [2024-11-27 19:30:14.052601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.052645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.052656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:04.634 [2024-11-27 19:30:14.052665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:04.634 [2024-11-27 19:30:14.052672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.052699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:04.634 [2024-11-27 19:30:14.056629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.056671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.634 [2024-11-27 19:30:14.056685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.935 ms 00:32:04.634 [2024-11-27 19:30:14.056693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.056726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.056735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:04.634 [2024-11-27 19:30:14.056744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:04.634 [2024-11-27 19:30:14.056751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.056799] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:04.634 [2024-11-27 19:30:14.056822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:04.634 [2024-11-27 19:30:14.056859] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:04.634 [2024-11-27 19:30:14.056878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:04.634 [2024-11-27 19:30:14.056983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:04.634 [2024-11-27 19:30:14.056993] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:04.634 [2024-11-27 19:30:14.057004] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:04.634 [2024-11-27 19:30:14.057015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:04.634 [2024-11-27 19:30:14.057024] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:04.634 [2024-11-27 19:30:14.057032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:04.634 [2024-11-27 19:30:14.057040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:04.634 [2024-11-27 19:30:14.057051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:04.634 [2024-11-27 19:30:14.057059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:04.634 [2024-11-27 19:30:14.057066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.057074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:04.634 [2024-11-27 19:30:14.057082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:04.634 [2024-11-27 19:30:14.057089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.634 [2024-11-27 19:30:14.057188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.634 [2024-11-27 19:30:14.057198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:04.635 [2024-11-27 19:30:14.057205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:04.635 [2024-11-27 19:30:14.057213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.057320] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:04.635 [2024-11-27 19:30:14.057331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:04.635 [2024-11-27 19:30:14.057340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:04.635 [2024-11-27 19:30:14.057364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:04.635 [2024-11-27 19:30:14.057384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.635 [2024-11-27 19:30:14.057397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:04.635 [2024-11-27 19:30:14.057404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:04.635 [2024-11-27 19:30:14.057413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.635 [2024-11-27 19:30:14.057427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:04.635 [2024-11-27 19:30:14.057434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:04.635 [2024-11-27 19:30:14.057440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:04.635 [2024-11-27 19:30:14.057453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057460] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:04.635 [2024-11-27 19:30:14.057474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:04.635 [2024-11-27 19:30:14.057494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:04.635 [2024-11-27 19:30:14.057514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:04.635 [2024-11-27 19:30:14.057534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:04.635 [2024-11-27 19:30:14.057553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.635 [2024-11-27 19:30:14.057565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:04.635 [2024-11-27 19:30:14.057571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:04.635 [2024-11-27 19:30:14.057578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.635 [2024-11-27 19:30:14.057585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:04.635 [2024-11-27 19:30:14.057591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:04.635 [2024-11-27 19:30:14.057597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:04.635 [2024-11-27 19:30:14.057609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:04.635 [2024-11-27 19:30:14.057615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057622] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:04.635 [2024-11-27 19:30:14.057635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:04.635 [2024-11-27 19:30:14.057644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.635 [2024-11-27 19:30:14.057659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:04.635 [2024-11-27 19:30:14.057666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:04.635 [2024-11-27 19:30:14.057672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:04.635 
[2024-11-27 19:30:14.057679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:04.635 [2024-11-27 19:30:14.057686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:04.635 [2024-11-27 19:30:14.057693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:04.635 [2024-11-27 19:30:14.057702] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:04.635 [2024-11-27 19:30:14.057711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:04.635 [2024-11-27 19:30:14.057729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:04.635 [2024-11-27 19:30:14.057736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:04.635 [2024-11-27 19:30:14.057743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:04.635 [2024-11-27 19:30:14.057751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:04.635 [2024-11-27 19:30:14.057759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:04.635 [2024-11-27 19:30:14.057766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:04.635 [2024-11-27 19:30:14.057774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:04.635 [2024-11-27 19:30:14.057781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:04.635 [2024-11-27 19:30:14.057787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:04.635 [2024-11-27 19:30:14.057822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:04.635 [2024-11-27 19:30:14.057830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:04.635 [2024-11-27 19:30:14.057845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:04.635 [2024-11-27 19:30:14.057852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:04.635 [2024-11-27 19:30:14.057859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:04.635 [2024-11-27 19:30:14.057866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.057873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:04.635 [2024-11-27 19:30:14.057882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:32:04.635 [2024-11-27 19:30:14.057889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.089603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.089654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.635 [2024-11-27 19:30:14.089666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.671 ms 00:32:04.635 [2024-11-27 19:30:14.089678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.089770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.089779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:04.635 [2024-11-27 19:30:14.089789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:04.635 [2024-11-27 19:30:14.089797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.138639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.138692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.635 [2024-11-27 19:30:14.138705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.780 ms 00:32:04.635 [2024-11-27 19:30:14.138713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.138761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.138771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.635 [2024-11-27 19:30:14.138784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.635 [2024-11-27 19:30:14.138792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.635 [2024-11-27 19:30:14.139451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.635 [2024-11-27 19:30:14.139474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.636 [2024-11-27 19:30:14.139485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:32:04.636 [2024-11-27 19:30:14.139493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.139655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.139665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.636 [2024-11-27 19:30:14.139679] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:04.636 [2024-11-27 19:30:14.139686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.155618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.155664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.636 [2024-11-27 19:30:14.155675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.912 ms 00:32:04.636 [2024-11-27 19:30:14.155683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.169945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:04.636 [2024-11-27 19:30:14.169996] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:04.636 [2024-11-27 19:30:14.170010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.170018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:04.636 [2024-11-27 19:30:14.170027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.220 ms 00:32:04.636 [2024-11-27 19:30:14.170035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.195399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.195455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:04.636 [2024-11-27 19:30:14.195466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.311 ms 00:32:04.636 [2024-11-27 19:30:14.195475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.208238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.208287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:04.636 [2024-11-27 19:30:14.208298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.713 ms 00:32:04.636 [2024-11-27 19:30:14.208306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.220806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.220853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:04.636 [2024-11-27 19:30:14.220865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.454 ms 00:32:04.636 [2024-11-27 19:30:14.220872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.636 [2024-11-27 19:30:14.221548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.636 [2024-11-27 19:30:14.221573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:04.636 [2024-11-27 19:30:14.221584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:32:04.636 [2024-11-27 19:30:14.221596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.288523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.288589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:04.898 [2024-11-27 19:30:14.288605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.906 ms 00:32:04.898 [2024-11-27 19:30:14.288622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.299747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:04.898 [2024-11-27 19:30:14.302952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.302997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:04.898 [2024-11-27 19:30:14.303009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.275 ms 00:32:04.898 [2024-11-27 19:30:14.303018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.303106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.303148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:04.898 [2024-11-27 19:30:14.303163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:04.898 [2024-11-27 19:30:14.303172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.303249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.303260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:04.898 [2024-11-27 19:30:14.303269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:04.898 [2024-11-27 19:30:14.303277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.303301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.303311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:04.898 [2024-11-27 19:30:14.303319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.898 [2024-11-27 19:30:14.303327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.303360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:04.898 [2024-11-27 19:30:14.303373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.303381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:04.898 [2024-11-27 19:30:14.303390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:04.898 [2024-11-27 19:30:14.303398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.328901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.328950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:04.898 [2024-11-27 19:30:14.328963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.482 ms 00:32:04.898 [2024-11-27 19:30:14.328977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.898 [2024-11-27 19:30:14.329062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.898 [2024-11-27 19:30:14.329073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:04.898 [2024-11-27 19:30:14.329082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:04.898 [2024-11-27 19:30:14.329090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
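The superblock metadata dump a few entries above lists every region twice: in raw FTL blocks (blk_offs/blk_sz, in hex) and, in the "NV cache layout" section, in MiB. The two views agree if one assumes a 4096-byte FTL block, an assumption inferred here from the numbers themselves rather than taken from SPDK headers: blk_sz 0x5000 is 20480 blocks, which is exactly the 80.00 MiB shown for the l2p region. A minimal C sketch of that conversion:

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed block size: 0x5000 blocks must equal the 80.00 MiB l2p region. */
    #define FTL_BLOCK_SIZE 4096ULL

    static double blocks_to_mib(uint64_t blocks)
    {
        return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
    }

    int main(void)
    {
        /* A few {name, blk_offs, blk_sz} triples copied from the dump above. */
        struct { const char *name; uint64_t offs, sz; } r[] = {
            { "sb      (type 0x0)", 0x0000, 0x0020 },
            { "l2p     (type 0x2)", 0x0020, 0x5000 },
            { "band_md (type 0x3)", 0x5020, 0x0080 },
        };
        for (size_t i = 0; i < sizeof(r) / sizeof(r[0]); i++)
            printf("%-20s offset %8.2f MiB, size %6.2f MiB\n", r[i].name,
                   blocks_to_mib(r[i].offs), blocks_to_mib(r[i].sz));
        /* Prints 0.00/0.12, 0.12/80.00 and 80.12/0.50, matching the
         * dump_region lines ("Region sb", "Region l2p", "Region band_md"). */
        return 0;
    }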
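Every management step in this startup sequence is bracketed by the same four trace_step notices: Action, name, duration, status. The following is a rough, self-contained sketch of that logging pattern; it is not SPDK's actual ftl_mngt implementation, which drives the steps asynchronously and keeps the rollback list that shows up later in this log, but it reproduces the shape of the output:

    #include <stdio.h>
    #include <time.h>

    typedef int (*step_fn)(void);

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    /* Run one step and emit the Action/name/duration/status quartet. */
    static int run_step(const char *name, step_fn fn)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0]  name:     %s\n", name);
        printf("[FTL][ftl0]  duration: %.3f ms\n", elapsed_ms(t0, t1));
        printf("[FTL][ftl0]  status:   %d\n", status);
        return status;
    }

    static int init_nv_cache(void) { return 0; }  /* stand-in for a real step */

    int main(void)
    {
        return run_step("Initialize NV cache", init_nv_cache);
    }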
00:32:04.898 [2024-11-27 19:30:14.330347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.830 ms, result 0
00:32:05.868 [2024-11-27T19:30:16.446Z] Copying: 11/1024 [MB] (11 MBps)
[2024-11-27T19:31:08.458Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-27 19:31:08.336098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:58.823 [2024-11-27
19:31:08.336179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:58.823 [2024-11-27 19:31:08.336195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:58.823 [2024-11-27 19:31:08.336204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.823 [2024-11-27 19:31:08.336228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:58.823 [2024-11-27 19:31:08.339331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.823 [2024-11-27 19:31:08.339381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:58.823 [2024-11-27 19:31:08.339407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:32:58.823 [2024-11-27 19:31:08.339419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.823 [2024-11-27 19:31:08.342309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.823 [2024-11-27 19:31:08.342375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:58.823 [2024-11-27 19:31:08.342392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.850 ms 00:32:58.823 [2024-11-27 19:31:08.342403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.823 [2024-11-27 19:31:08.342442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.823 [2024-11-27 19:31:08.342456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:58.823 [2024-11-27 19:31:08.342469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:58.823 [2024-11-27 19:31:08.342481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.823 [2024-11-27 19:31:08.342558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.823 [2024-11-27 19:31:08.342575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:58.823 [2024-11-27 19:31:08.342589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:58.823 [2024-11-27 19:31:08.342609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.823 [2024-11-27 19:31:08.342631] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:58.823 [2024-11-27 19:31:08.342650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 
[2024-11-27 19:31:08.342773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.342995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:32:58.823 [2024-11-27 19:31:08.343173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:58.823 [2024-11-27 19:31:08.343204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.343991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:58.824 [2024-11-27 19:31:08.344091] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:58.824 [2024-11-27 19:31:08.344105] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f 00:32:58.824 [2024-11-27 19:31:08.344118] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:58.824 [2024-11-27 19:31:08.344144] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:58.824 [2024-11-27 19:31:08.344157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:58.824 [2024-11-27 19:31:08.344182] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:58.824 [2024-11-27 19:31:08.344194] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:58.824 [2024-11-27 19:31:08.344208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:58.824 [2024-11-27 19:31:08.344225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:58.824 [2024-11-27 
19:31:08.344235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:58.824 [2024-11-27 19:31:08.344247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:58.824 [2024-11-27 19:31:08.344265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.824 [2024-11-27 19:31:08.344277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:58.824 [2024-11-27 19:31:08.344292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:32:58.824 [2024-11-27 19:31:08.344304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.824 [2024-11-27 19:31:08.358134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.824 [2024-11-27 19:31:08.358201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:58.824 [2024-11-27 19:31:08.358219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.789 ms 00:32:58.824 [2024-11-27 19:31:08.358230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.824 [2024-11-27 19:31:08.358664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.824 [2024-11-27 19:31:08.358701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:58.824 [2024-11-27 19:31:08.358718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:32:58.824 [2024-11-27 19:31:08.358729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.824 [2024-11-27 19:31:08.395991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.824 [2024-11-27 19:31:08.396055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:58.824 [2024-11-27 19:31:08.396072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.824 [2024-11-27 19:31:08.396084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.824 [2024-11-27 19:31:08.396185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.824 [2024-11-27 19:31:08.396201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:58.824 [2024-11-27 19:31:08.396215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.824 [2024-11-27 19:31:08.396227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.824 [2024-11-27 19:31:08.396308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.825 [2024-11-27 19:31:08.396353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:58.825 [2024-11-27 19:31:08.396367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.825 [2024-11-27 19:31:08.396381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.825 [2024-11-27 19:31:08.396412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.825 [2024-11-27 19:31:08.396431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:58.825 [2024-11-27 19:31:08.396456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.825 [2024-11-27 19:31:08.396472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.480978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.481054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:32:59.087 [2024-11-27 19:31:08.481072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.481084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.551324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.551385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.087 [2024-11-27 19:31:08.551403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.551416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.551520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.551538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:59.087 [2024-11-27 19:31:08.551558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.551571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.551629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.551646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:59.087 [2024-11-27 19:31:08.551661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.551674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.551795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.551823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:59.087 [2024-11-27 19:31:08.551850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.551873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.551920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.551941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:59.087 [2024-11-27 19:31:08.551954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.551967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.552022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.552035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:59.087 [2024-11-27 19:31:08.552046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.552061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.552152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.087 [2024-11-27 19:31:08.552170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:59.087 [2024-11-27 19:31:08.552183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.087 [2024-11-27 19:31:08.552196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.087 [2024-11-27 19:31:08.552379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL fast shutdown', duration = 216.216 ms, result 0 00:32:59.659 00:32:59.659 00:32:59.659 19:31:09 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:59.920 [2024-11-27 19:31:09.363829] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:32:59.920 [2024-11-27 19:31:09.363977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84797 ] 00:32:59.920 [2024-11-27 19:31:09.534527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.181 [2024-11-27 19:31:09.650561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.443 [2024-11-27 19:31:09.946411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:00.443 [2024-11-27 19:31:09.946494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:00.707 [2024-11-27 19:31:10.107767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.707 [2024-11-27 19:31:10.107842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:00.707 [2024-11-27 19:31:10.107858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:00.707 [2024-11-27 19:31:10.107868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.707 [2024-11-27 19:31:10.107924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.707 [2024-11-27 19:31:10.107939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:00.707 [2024-11-27 19:31:10.107947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:00.707 [2024-11-27 19:31:10.107955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.707 [2024-11-27 19:31:10.107978] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:00.707 [2024-11-27 19:31:10.108778] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:00.707 [2024-11-27 19:31:10.108812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.707 [2024-11-27 19:31:10.108821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:00.707 [2024-11-27 19:31:10.108830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:33:00.707 [2024-11-27 19:31:10.108838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.707 [2024-11-27 19:31:10.109199] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:00.708 [2024-11-27 19:31:10.109284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.109341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:00.708 [2024-11-27 19:31:10.109351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:33:00.708 [2024-11-27 19:31:10.109359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.109412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
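One detail from the statistics dump during the fast shutdown above is worth spelling out: total writes: 32, user writes: 0, WAF: inf. Write amplification is conventionally everything the media wrote divided by what the host asked it to write, so with zero user writes the quotient is infinite and the 32 writes are pure FTL metadata. A sketch of that calculation, using the standard definition rather than anything lifted from ftl_debug.c:

    #include <stdio.h>
    #include <math.h>

    /* Standard write-amplification factor: media writes / host writes. */
    static double waf(unsigned long long total_writes,
                      unsigned long long user_writes)
    {
        if (user_writes == 0)
            return INFINITY;  /* the dump prints "WAF: inf" in this case */
        return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(32, 0));  /* counters from the dump above */
        return 0;
    }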
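The restore pass above reads the data back out of ftl0 with spdk_dd and --count=262144. Assuming the ftl0 bdev exposes 4096-byte logical blocks, consistent with the layout arithmetic earlier, that count is exactly the 1024 MiB the copy phase reported:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t count = 262144;     /* --count from the spdk_dd invocation */
        uint64_t block_size = 4096;  /* assumed ftl0 logical block size */
        printf("%llu blocks * %llu B = %llu MiB\n",
               (unsigned long long)count, (unsigned long long)block_size,
               (unsigned long long)((count * block_size) >> 20));  /* 1024 MiB */
        return 0;
    }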
00:33:00.708 [2024-11-27 19:31:10.109423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:00.708 [2024-11-27 19:31:10.109431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:00.708 [2024-11-27 19:31:10.109439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.109715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.109735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:00.708 [2024-11-27 19:31:10.109744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:33:00.708 [2024-11-27 19:31:10.109752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.109824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.109834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:00.708 [2024-11-27 19:31:10.109842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:00.708 [2024-11-27 19:31:10.109851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.109875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.109884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:00.708 [2024-11-27 19:31:10.109895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:00.708 [2024-11-27 19:31:10.109903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.109925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:00.708 [2024-11-27 19:31:10.114256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.114303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:00.708 [2024-11-27 19:31:10.114314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:33:00.708 [2024-11-27 19:31:10.114321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.114356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.114365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:00.708 [2024-11-27 19:31:10.114373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:00.708 [2024-11-27 19:31:10.114381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.114441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:00.708 [2024-11-27 19:31:10.114466] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:00.708 [2024-11-27 19:31:10.114507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:00.708 [2024-11-27 19:31:10.114524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:00.708 [2024-11-27 19:31:10.114629] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:00.708 [2024-11-27 19:31:10.114639] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:00.708 [2024-11-27 19:31:10.114650] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:00.708 [2024-11-27 19:31:10.114662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:00.708 [2024-11-27 19:31:10.114672] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:00.708 [2024-11-27 19:31:10.114683] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:00.708 [2024-11-27 19:31:10.114691] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:00.708 [2024-11-27 19:31:10.114699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:00.708 [2024-11-27 19:31:10.114707] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:00.708 [2024-11-27 19:31:10.114716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.114724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:00.708 [2024-11-27 19:31:10.114731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:33:00.708 [2024-11-27 19:31:10.114739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.114821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.708 [2024-11-27 19:31:10.114841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:00.708 [2024-11-27 19:31:10.114850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:00.708 [2024-11-27 19:31:10.114860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.708 [2024-11-27 19:31:10.114965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:00.708 [2024-11-27 19:31:10.114977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:00.708 [2024-11-27 19:31:10.114985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:00.708 [2024-11-27 19:31:10.114993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:00.708 [2024-11-27 19:31:10.115010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:00.708 [2024-11-27 19:31:10.115031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:00.708 [2024-11-27 19:31:10.115045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:00.708 [2024-11-27 19:31:10.115053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:00.708 [2024-11-27 19:31:10.115060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:00.708 [2024-11-27 19:31:10.115068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:00.708 [2024-11-27 19:31:10.115075] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:33:00.708 [2024-11-27 19:31:10.115089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:00.708 [2024-11-27 19:31:10.115103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:00.708 [2024-11-27 19:31:10.115167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:00.708 [2024-11-27 19:31:10.115189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:00.708 [2024-11-27 19:31:10.115210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:00.708 [2024-11-27 19:31:10.115232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:00.708 [2024-11-27 19:31:10.115245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:00.708 [2024-11-27 19:31:10.115252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:00.708 [2024-11-27 19:31:10.115266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:00.708 [2024-11-27 19:31:10.115273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:00.708 [2024-11-27 19:31:10.115279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:00.708 [2024-11-27 19:31:10.115288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:00.708 [2024-11-27 19:31:10.115296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:00.708 [2024-11-27 19:31:10.115302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:00.708 [2024-11-27 19:31:10.115316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:00.708 [2024-11-27 19:31:10.115323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:00.708 [2024-11-27 19:31:10.115340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:00.708 [2024-11-27 19:31:10.115347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:00.708 [2024-11-27 
19:31:10.115354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:00.708 [2024-11-27 19:31:10.115366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:00.708 [2024-11-27 19:31:10.115373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:00.708 [2024-11-27 19:31:10.115380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:00.708 [2024-11-27 19:31:10.115387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:00.708 [2024-11-27 19:31:10.115393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:00.708 [2024-11-27 19:31:10.115400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:00.709 [2024-11-27 19:31:10.115409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:00.709 [2024-11-27 19:31:10.115418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:00.709 [2024-11-27 19:31:10.115435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:00.709 [2024-11-27 19:31:10.115442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:00.709 [2024-11-27 19:31:10.115449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:00.709 [2024-11-27 19:31:10.115455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:00.709 [2024-11-27 19:31:10.115462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:00.709 [2024-11-27 19:31:10.115469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:00.709 [2024-11-27 19:31:10.115476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:00.709 [2024-11-27 19:31:10.115483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:00.709 [2024-11-27 19:31:10.115491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:00.709 [2024-11-27 
19:31:10.115527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:00.709 [2024-11-27 19:31:10.115536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:00.709 [2024-11-27 19:31:10.115551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:00.709 [2024-11-27 19:31:10.115558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:00.709 [2024-11-27 19:31:10.115565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:00.709 [2024-11-27 19:31:10.115574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.115592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:00.709 [2024-11-27 19:31:10.115601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:33:00.709 [2024-11-27 19:31:10.115609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.143455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.143497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:00.709 [2024-11-27 19:31:10.143509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.801 ms 00:33:00.709 [2024-11-27 19:31:10.143518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.143606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.143616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:00.709 [2024-11-27 19:31:10.143628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:00.709 [2024-11-27 19:31:10.143636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.185944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.185985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:00.709 [2024-11-27 19:31:10.185998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.252 ms 00:33:00.709 [2024-11-27 19:31:10.186006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.186045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.186055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:00.709 [2024-11-27 19:31:10.186064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:00.709 [2024-11-27 19:31:10.186071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.186177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.186189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:00.709 [2024-11-27 19:31:10.186197] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:00.709 [2024-11-27 19:31:10.186205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.186316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.186326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:00.709 [2024-11-27 19:31:10.186334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:33:00.709 [2024-11-27 19:31:10.186341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.199494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.199526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:00.709 [2024-11-27 19:31:10.199536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.136 ms 00:33:00.709 [2024-11-27 19:31:10.199543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.199649] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:00.709 [2024-11-27 19:31:10.199662] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:00.709 [2024-11-27 19:31:10.199671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.199681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:00.709 [2024-11-27 19:31:10.199690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:00.709 [2024-11-27 19:31:10.199697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.211940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.211971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:00.709 [2024-11-27 19:31:10.211981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:33:00.709 [2024-11-27 19:31:10.211989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.212116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.212136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:00.709 [2024-11-27 19:31:10.212144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:33:00.709 [2024-11-27 19:31:10.212154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.212195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.212204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:00.709 [2024-11-27 19:31:10.212219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:00.709 [2024-11-27 19:31:10.212226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.212770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.212788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:00.709 [2024-11-27 19:31:10.212797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.508 ms 00:33:00.709 [2024-11-27 19:31:10.212803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.212822] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:00.709 [2024-11-27 19:31:10.212832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.212840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:00.709 [2024-11-27 19:31:10.212847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:00.709 [2024-11-27 19:31:10.212854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.223899] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:00.709 [2024-11-27 19:31:10.224028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.224038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:00.709 [2024-11-27 19:31:10.224047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 00:33:00.709 [2024-11-27 19:31:10.224054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.226157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.226181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:00.709 [2024-11-27 19:31:10.226190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:33:00.709 [2024-11-27 19:31:10.226197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.226270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.226279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:00.709 [2024-11-27 19:31:10.226287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:00.709 [2024-11-27 19:31:10.226294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.226315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.226327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:00.709 [2024-11-27 19:31:10.226334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:00.709 [2024-11-27 19:31:10.226341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.709 [2024-11-27 19:31:10.226367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:00.709 [2024-11-27 19:31:10.226376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.709 [2024-11-27 19:31:10.226383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:00.709 [2024-11-27 19:31:10.226391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:00.709 [2024-11-27 19:31:10.226397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.710 [2024-11-27 19:31:10.250594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.710 [2024-11-27 19:31:10.250629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:00.710 [2024-11-27 19:31:10.250640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.178 ms 00:33:00.710 [2024-11-27 19:31:10.250647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.710 [2024-11-27 19:31:10.250716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.710 [2024-11-27 19:31:10.250726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:00.710 [2024-11-27 19:31:10.250734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:00.710 [2024-11-27 19:31:10.250741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.710 [2024-11-27 19:31:10.251694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.539 ms, result 0 00:33:02.099  [2024-11-27T19:31:12.682Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-27T19:31:13.687Z] Copying: 23/1024 [MB] (12 MBps) [2024-11-27T19:31:14.632Z] Copying: 35/1024 [MB] (12 MBps) [2024-11-27T19:31:15.579Z] Copying: 51/1024 [MB] (15 MBps) [2024-11-27T19:31:16.525Z] Copying: 69/1024 [MB] (18 MBps) [2024-11-27T19:31:17.470Z] Copying: 92/1024 [MB] (23 MBps) [2024-11-27T19:31:18.860Z] Copying: 103/1024 [MB] (10 MBps) [2024-11-27T19:31:19.431Z] Copying: 113/1024 [MB] (10 MBps) [2024-11-27T19:31:20.817Z] Copying: 133/1024 [MB] (19 MBps) [2024-11-27T19:31:21.761Z] Copying: 147/1024 [MB] (13 MBps) [2024-11-27T19:31:22.705Z] Copying: 161/1024 [MB] (14 MBps) [2024-11-27T19:31:23.649Z] Copying: 171/1024 [MB] (10 MBps) [2024-11-27T19:31:24.593Z] Copying: 187/1024 [MB] (15 MBps) [2024-11-27T19:31:25.538Z] Copying: 201/1024 [MB] (14 MBps) [2024-11-27T19:31:26.483Z] Copying: 212/1024 [MB] (10 MBps) [2024-11-27T19:31:27.870Z] Copying: 224/1024 [MB] (12 MBps) [2024-11-27T19:31:28.442Z] Copying: 243/1024 [MB] (18 MBps) [2024-11-27T19:31:29.828Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-27T19:31:30.773Z] Copying: 271/1024 [MB] (17 MBps) [2024-11-27T19:31:31.718Z] Copying: 286/1024 [MB] (15 MBps) [2024-11-27T19:31:32.662Z] Copying: 309/1024 [MB] (22 MBps) [2024-11-27T19:31:33.607Z] Copying: 327/1024 [MB] (18 MBps) [2024-11-27T19:31:34.551Z] Copying: 346/1024 [MB] (18 MBps) [2024-11-27T19:31:35.497Z] Copying: 362/1024 [MB] (16 MBps) [2024-11-27T19:31:36.442Z] Copying: 384/1024 [MB] (22 MBps) [2024-11-27T19:31:37.830Z] Copying: 400/1024 [MB] (15 MBps) [2024-11-27T19:31:38.772Z] Copying: 418/1024 [MB] (18 MBps) [2024-11-27T19:31:39.718Z] Copying: 433/1024 [MB] (14 MBps) [2024-11-27T19:31:40.662Z] Copying: 452/1024 [MB] (19 MBps) [2024-11-27T19:31:41.605Z] Copying: 477/1024 [MB] (24 MBps) [2024-11-27T19:31:42.589Z] Copying: 497/1024 [MB] (20 MBps) [2024-11-27T19:31:43.557Z] Copying: 517/1024 [MB] (19 MBps) [2024-11-27T19:31:44.500Z] Copying: 539/1024 [MB] (22 MBps) [2024-11-27T19:31:45.442Z] Copying: 560/1024 [MB] (20 MBps) [2024-11-27T19:31:46.827Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-27T19:31:47.771Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-27T19:31:48.714Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-27T19:31:49.658Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-27T19:31:50.603Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-27T19:31:51.546Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-27T19:31:52.491Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-27T19:31:53.435Z] Copying: 650/1024 [MB] (15 MBps) [2024-11-27T19:31:54.823Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-27T19:31:55.767Z] Copying: 675/1024 [MB] (13 MBps) [2024-11-27T19:31:56.711Z] Copying: 695/1024 [MB] (20 MBps) [2024-11-27T19:31:57.654Z] Copying: 712/1024 [MB] (16 MBps) 
[2024-11-27T19:31:58.597Z] Copying: 734/1024 [MB] (22 MBps) [2024-11-27T19:31:59.539Z] Copying: 746/1024 [MB] (12 MBps) [2024-11-27T19:32:00.479Z] Copying: 766/1024 [MB] (20 MBps) [2024-11-27T19:32:01.862Z] Copying: 785/1024 [MB] (18 MBps) [2024-11-27T19:32:02.433Z] Copying: 801/1024 [MB] (16 MBps) [2024-11-27T19:32:03.819Z] Copying: 818/1024 [MB] (16 MBps) [2024-11-27T19:32:04.762Z] Copying: 835/1024 [MB] (17 MBps) [2024-11-27T19:32:05.705Z] Copying: 856/1024 [MB] (20 MBps) [2024-11-27T19:32:06.649Z] Copying: 874/1024 [MB] (17 MBps) [2024-11-27T19:32:07.593Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-27T19:32:08.537Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-27T19:32:09.481Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-27T19:32:10.868Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-27T19:32:11.466Z] Copying: 931/1024 [MB] (15 MBps) [2024-11-27T19:32:12.478Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-27T19:32:13.865Z] Copying: 965/1024 [MB] (22 MBps) [2024-11-27T19:32:14.438Z] Copying: 981/1024 [MB] (15 MBps) [2024-11-27T19:32:15.826Z] Copying: 995/1024 [MB] (14 MBps) [2024-11-27T19:32:16.086Z] Copying: 1013/1024 [MB] (17 MBps) [2024-11-27T19:32:16.660Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 19:32:16.402671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.025 [2024-11-27 19:32:16.402760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:07.025 [2024-11-27 19:32:16.402778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:07.025 [2024-11-27 19:32:16.402789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.025 [2024-11-27 19:32:16.402828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:07.025 [2024-11-27 19:32:16.406873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.025 [2024-11-27 19:32:16.406917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:07.025 [2024-11-27 19:32:16.406931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:34:07.025 [2024-11-27 19:32:16.406941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.025 [2024-11-27 19:32:16.407263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.025 [2024-11-27 19:32:16.407277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:07.025 [2024-11-27 19:32:16.407288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:34:07.025 [2024-11-27 19:32:16.407297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.025 [2024-11-27 19:32:16.407337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.025 [2024-11-27 19:32:16.407349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:07.025 [2024-11-27 19:32:16.407359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:07.025 [2024-11-27 19:32:16.407369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.025 [2024-11-27 19:32:16.407435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.025 [2024-11-27 19:32:16.407447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:07.025 [2024-11-27 19:32:16.407457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:07.025 [2024-11-27 19:32:16.407467] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.025 [2024-11-27 19:32:16.407817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:07.025 [2024-11-27 19:32:16.407831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.407994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:07.025 [2024-11-27 19:32:16.408092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408231] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 
19:32:16.408439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:34:07.026 [2024-11-27 19:32:16.408640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:07.026 [2024-11-27 19:32:16.408663] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:07.026 [2024-11-27 19:32:16.408671] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f 00:34:07.026 [2024-11-27 19:32:16.408679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:07.026 [2024-11-27 19:32:16.408686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:07.026 [2024-11-27 19:32:16.408693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:07.026 [2024-11-27 19:32:16.408702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:07.026 [2024-11-27 19:32:16.408709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:07.026 [2024-11-27 19:32:16.408717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:07.026 [2024-11-27 19:32:16.408725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:07.026 [2024-11-27 19:32:16.408731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:07.026 [2024-11-27 19:32:16.408738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:07.026 [2024-11-27 19:32:16.408745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.026 [2024-11-27 19:32:16.408754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:07.026 [2024-11-27 19:32:16.408762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:34:07.026 [2024-11-27 19:32:16.408772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.026 [2024-11-27 19:32:16.423901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.026 [2024-11-27 19:32:16.423943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:07.026 [2024-11-27 19:32:16.423955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.112 ms 00:34:07.026 [2024-11-27 19:32:16.423963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.026 [2024-11-27 19:32:16.424371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:07.026 [2024-11-27 19:32:16.424382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:07.026 [2024-11-27 19:32:16.424399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:34:07.026 [2024-11-27 19:32:16.424407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.026 [2024-11-27 19:32:16.461018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.026 [2024-11-27 19:32:16.461082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:07.026 [2024-11-27 19:32:16.461095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.026 [2024-11-27 19:32:16.461105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.026 [2024-11-27 19:32:16.461190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.461201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:34:07.027 [2024-11-27 19:32:16.461216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.461226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.461302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.461314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:07.027 [2024-11-27 19:32:16.461324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.461333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.461351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.461361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:07.027 [2024-11-27 19:32:16.461370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.461382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.546712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.546768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:07.027 [2024-11-27 19:32:16.546783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.546791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.616832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.616883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:07.027 [2024-11-27 19:32:16.616896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.616910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.616992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.617002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:07.027 [2024-11-27 19:32:16.617012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.617074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:07.027 [2024-11-27 19:32:16.617083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.617211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:07.027 [2024-11-27 19:32:16.617220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:34:07.027 [2024-11-27 19:32:16.617263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:07.027 [2024-11-27 19:32:16.617271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.617331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:07.027 [2024-11-27 19:32:16.617340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:07.027 [2024-11-27 19:32:16.617403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:07.027 [2024-11-27 19:32:16.617412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:07.027 [2024-11-27 19:32:16.617420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:07.027 [2024-11-27 19:32:16.617560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 214.859 ms, result 0 00:34:07.972 00:34:07.972 00:34:07.972 19:32:17 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:10.519 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:10.519 19:32:19 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:10.519 [2024-11-27 19:32:19.718403] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:34:10.519 [2024-11-27 19:32:19.719048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85496 ] 00:34:10.519 [2024-11-27 19:32:19.882544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.519 [2024-11-27 19:32:19.981732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.780 [2024-11-27 19:32:20.270180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:10.780 [2024-11-27 19:32:20.270260] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:11.043 [2024-11-27 19:32:20.430987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.431047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:11.043 [2024-11-27 19:32:20.431063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:11.043 [2024-11-27 19:32:20.431072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.431143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.431157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:11.043 [2024-11-27 19:32:20.431167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:11.043 [2024-11-27 19:32:20.431190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.431211] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:11.043 [2024-11-27 19:32:20.432046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:11.043 [2024-11-27 19:32:20.432079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.432088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:11.043 [2024-11-27 19:32:20.432098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:34:11.043 [2024-11-27 19:32:20.432105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.432444] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:11.043 [2024-11-27 19:32:20.432472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.432485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:11.043 [2024-11-27 19:32:20.432495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:11.043 [2024-11-27 19:32:20.432503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.432553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.432562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:11.043 [2024-11-27 19:32:20.432570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:11.043 [2024-11-27 19:32:20.432577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.432847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:11.043 [2024-11-27 19:32:20.432859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:11.043 [2024-11-27 19:32:20.432867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:34:11.043 [2024-11-27 19:32:20.432876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.432945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.432955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:11.043 [2024-11-27 19:32:20.432963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:11.043 [2024-11-27 19:32:20.432970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.432993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.433001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:11.043 [2024-11-27 19:32:20.433012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:11.043 [2024-11-27 19:32:20.433020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.433037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:11.043 [2024-11-27 19:32:20.437285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.437321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:11.043 [2024-11-27 19:32:20.437332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:34:11.043 [2024-11-27 19:32:20.437340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.437380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.437389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:11.043 [2024-11-27 19:32:20.437397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:11.043 [2024-11-27 19:32:20.437405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.437458] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:11.043 [2024-11-27 19:32:20.437482] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:11.043 [2024-11-27 19:32:20.437520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:11.043 [2024-11-27 19:32:20.437536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:11.043 [2024-11-27 19:32:20.437641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:11.043 [2024-11-27 19:32:20.437656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:11.043 [2024-11-27 19:32:20.437672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:11.043 [2024-11-27 19:32:20.437686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:11.043 [2024-11-27 19:32:20.437700] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:11.043 [2024-11-27 19:32:20.437716] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:11.043 [2024-11-27 19:32:20.437727] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:11.043 [2024-11-27 19:32:20.437737] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:11.043 [2024-11-27 19:32:20.437748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:11.043 [2024-11-27 19:32:20.437762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.437773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:11.043 [2024-11-27 19:32:20.437785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:34:11.043 [2024-11-27 19:32:20.437796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.437884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.043 [2024-11-27 19:32:20.437892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:11.043 [2024-11-27 19:32:20.437900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:11.043 [2024-11-27 19:32:20.437913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.043 [2024-11-27 19:32:20.438047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:11.043 [2024-11-27 19:32:20.438062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:11.043 [2024-11-27 19:32:20.438071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:11.043 [2024-11-27 19:32:20.438079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.043 [2024-11-27 19:32:20.438092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:11.043 [2024-11-27 19:32:20.438100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:11.043 [2024-11-27 19:32:20.438107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:11.043 [2024-11-27 19:32:20.438114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:11.043 [2024-11-27 19:32:20.438136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:11.043 [2024-11-27 19:32:20.438144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:11.043 [2024-11-27 19:32:20.438152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:11.043 [2024-11-27 19:32:20.438159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:11.044 [2024-11-27 19:32:20.438166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:11.044 [2024-11-27 19:32:20.438173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:11.044 [2024-11-27 19:32:20.438180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:11.044 [2024-11-27 19:32:20.438198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:11.044 [2024-11-27 19:32:20.438220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438230] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:11.044 [2024-11-27 19:32:20.438252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:11.044 [2024-11-27 19:32:20.438284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:11.044 [2024-11-27 19:32:20.438307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:11.044 [2024-11-27 19:32:20.438327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:11.044 [2024-11-27 19:32:20.438354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:11.044 [2024-11-27 19:32:20.438374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:11.044 [2024-11-27 19:32:20.438385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:11.044 [2024-11-27 19:32:20.438397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:11.044 [2024-11-27 19:32:20.438407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:11.044 [2024-11-27 19:32:20.438418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:11.044 [2024-11-27 19:32:20.438428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:11.044 [2024-11-27 19:32:20.438448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:11.044 [2024-11-27 19:32:20.438461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438468] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:11.044 [2024-11-27 19:32:20.438476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:11.044 [2024-11-27 19:32:20.438484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:11.044 [2024-11-27 19:32:20.438506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:11.044 [2024-11-27 19:32:20.438513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:11.044 [2024-11-27 19:32:20.438520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:11.044 
[2024-11-27 19:32:20.438527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:11.044 [2024-11-27 19:32:20.438537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:11.044 [2024-11-27 19:32:20.438548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:11.044 [2024-11-27 19:32:20.438561] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:11.044 [2024-11-27 19:32:20.438576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:11.044 [2024-11-27 19:32:20.438593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:11.044 [2024-11-27 19:32:20.438601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:11.044 [2024-11-27 19:32:20.438608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:11.044 [2024-11-27 19:32:20.438615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:11.044 [2024-11-27 19:32:20.438622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:11.044 [2024-11-27 19:32:20.438629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:11.044 [2024-11-27 19:32:20.438638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:11.044 [2024-11-27 19:32:20.438645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:11.044 [2024-11-27 19:32:20.438652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:11.044 [2024-11-27 19:32:20.438689] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:11.044 [2024-11-27 19:32:20.438698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:11.044 [2024-11-27 19:32:20.438714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:11.044 [2024-11-27 19:32:20.438721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:11.044 [2024-11-27 19:32:20.438729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:11.044 [2024-11-27 19:32:20.438737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.044 [2024-11-27 19:32:20.438745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:11.044 [2024-11-27 19:32:20.438753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:34:11.044 [2024-11-27 19:32:20.438760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.044 [2024-11-27 19:32:20.466178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.044 [2024-11-27 19:32:20.466219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:11.044 [2024-11-27 19:32:20.466231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.360 ms 00:34:11.044 [2024-11-27 19:32:20.466239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.044 [2024-11-27 19:32:20.466324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.044 [2024-11-27 19:32:20.466333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:11.044 [2024-11-27 19:32:20.466345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:34:11.045 [2024-11-27 19:32:20.466353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.512056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.512104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:11.045 [2024-11-27 19:32:20.512117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.648 ms 00:34:11.045 [2024-11-27 19:32:20.512139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.512189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.512200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:11.045 [2024-11-27 19:32:20.512210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:11.045 [2024-11-27 19:32:20.512218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.512328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.512340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:11.045 [2024-11-27 19:32:20.512349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:11.045 [2024-11-27 19:32:20.512357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.512489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.512505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:11.045 [2024-11-27 19:32:20.512517] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:34:11.045 [2024-11-27 19:32:20.512528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.528441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.528483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:11.045 [2024-11-27 19:32:20.528494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.885 ms 00:34:11.045 [2024-11-27 19:32:20.528502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.528651] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:11.045 [2024-11-27 19:32:20.528666] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:11.045 [2024-11-27 19:32:20.528678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.528687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:11.045 [2024-11-27 19:32:20.528696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:11.045 [2024-11-27 19:32:20.528703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.541046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.541088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:11.045 [2024-11-27 19:32:20.541101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.325 ms 00:34:11.045 [2024-11-27 19:32:20.541110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.541249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.541259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:11.045 [2024-11-27 19:32:20.541269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:34:11.045 [2024-11-27 19:32:20.541283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.541333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.541342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:11.045 [2024-11-27 19:32:20.541358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:11.045 [2024-11-27 19:32:20.541365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.541951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.541972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:11.045 [2024-11-27 19:32:20.541980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:34:11.045 [2024-11-27 19:32:20.541988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.542009] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:11.045 [2024-11-27 19:32:20.542019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.542027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:11.045 [2024-11-27 19:32:20.542034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:11.045 [2024-11-27 19:32:20.542042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.554720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:11.045 [2024-11-27 19:32:20.554879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.554890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:11.045 [2024-11-27 19:32:20.554901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.820 ms 00:34:11.045 [2024-11-27 19:32:20.554909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.557310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.557346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:11.045 [2024-11-27 19:32:20.557356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:34:11.045 [2024-11-27 19:32:20.557364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.557463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.557473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:11.045 [2024-11-27 19:32:20.557483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:11.045 [2024-11-27 19:32:20.557491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.557516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.557529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:11.045 [2024-11-27 19:32:20.557537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:11.045 [2024-11-27 19:32:20.557545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.557576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:11.045 [2024-11-27 19:32:20.557585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.557593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:11.045 [2024-11-27 19:32:20.557601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:11.045 [2024-11-27 19:32:20.557608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.583990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.584035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:11.045 [2024-11-27 19:32:20.584047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.359 ms 00:34:11.045 [2024-11-27 19:32:20.584056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.584153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.045 [2024-11-27 19:32:20.584165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:11.045 [2024-11-27 19:32:20.584174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.053 ms 00:34:11.045 [2024-11-27 19:32:20.584183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.045 [2024-11-27 19:32:20.585385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.870 ms, result 0 00:34:11.990  [2024-11-27T19:32:23.014Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T19:32:23.957Z] Copying: 27/1024 [MB] (15 MBps) [2024-11-27T19:32:24.898Z] Copying: 60/1024 [MB] (32 MBps) [2024-11-27T19:32:25.843Z] Copying: 83/1024 [MB] (22 MBps) [2024-11-27T19:32:26.789Z] Copying: 97/1024 [MB] (14 MBps) [2024-11-27T19:32:27.733Z] Copying: 111/1024 [MB] (13 MBps) [2024-11-27T19:32:28.677Z] Copying: 125/1024 [MB] (14 MBps) [2024-11-27T19:32:29.620Z] Copying: 150/1024 [MB] (25 MBps) [2024-11-27T19:32:31.006Z] Copying: 183/1024 [MB] (32 MBps) [2024-11-27T19:32:31.947Z] Copying: 215/1024 [MB] (31 MBps) [2024-11-27T19:32:32.891Z] Copying: 247/1024 [MB] (31 MBps) [2024-11-27T19:32:33.834Z] Copying: 264/1024 [MB] (17 MBps) [2024-11-27T19:32:34.779Z] Copying: 281/1024 [MB] (16 MBps) [2024-11-27T19:32:35.723Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-27T19:32:36.668Z] Copying: 313/1024 [MB] (14 MBps) [2024-11-27T19:32:37.614Z] Copying: 329/1024 [MB] (16 MBps) [2024-11-27T19:32:39.001Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-27T19:32:39.944Z] Copying: 355/1024 [MB] (14 MBps) [2024-11-27T19:32:40.607Z] Copying: 371/1024 [MB] (15 MBps) [2024-11-27T19:32:41.995Z] Copying: 381/1024 [MB] (10 MBps) [2024-11-27T19:32:42.941Z] Copying: 404/1024 [MB] (23 MBps) [2024-11-27T19:32:43.885Z] Copying: 418/1024 [MB] (14 MBps) [2024-11-27T19:32:44.832Z] Copying: 429/1024 [MB] (10 MBps) [2024-11-27T19:32:45.776Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-27T19:32:46.721Z] Copying: 458/1024 [MB] (12 MBps) [2024-11-27T19:32:47.665Z] Copying: 473/1024 [MB] (15 MBps) [2024-11-27T19:32:48.609Z] Copying: 490/1024 [MB] (16 MBps) [2024-11-27T19:32:49.998Z] Copying: 512/1024 [MB] (21 MBps) [2024-11-27T19:32:50.944Z] Copying: 529/1024 [MB] (17 MBps) [2024-11-27T19:32:51.891Z] Copying: 547/1024 [MB] (17 MBps) [2024-11-27T19:32:52.834Z] Copying: 562/1024 [MB] (14 MBps) [2024-11-27T19:32:53.777Z] Copying: 577/1024 [MB] (15 MBps) [2024-11-27T19:32:54.724Z] Copying: 595/1024 [MB] (17 MBps) [2024-11-27T19:32:55.670Z] Copying: 615/1024 [MB] (19 MBps) [2024-11-27T19:32:56.617Z] Copying: 630/1024 [MB] (15 MBps) [2024-11-27T19:32:58.016Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-27T19:32:58.960Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-27T19:32:59.905Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-27T19:33:00.851Z] Copying: 692/1024 [MB] (30 MBps) [2024-11-27T19:33:01.795Z] Copying: 714/1024 [MB] (22 MBps) [2024-11-27T19:33:02.740Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-27T19:33:03.686Z] Copying: 735/1024 [MB] (10 MBps) [2024-11-27T19:33:04.631Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-27T19:33:06.019Z] Copying: 755/1024 [MB] (10 MBps) [2024-11-27T19:33:06.965Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-27T19:33:07.909Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-27T19:33:08.853Z] Copying: 791/1024 [MB] (14 MBps) [2024-11-27T19:33:09.849Z] Copying: 802/1024 [MB] (11 MBps) [2024-11-27T19:33:10.791Z] Copying: 817/1024 [MB] (14 MBps) [2024-11-27T19:33:11.736Z] Copying: 832/1024 [MB] (15 MBps) [2024-11-27T19:33:12.679Z] Copying: 849/1024 [MB] (16 MBps) [2024-11-27T19:33:13.637Z] Copying: 871/1024 [MB] (21 MBps) [2024-11-27T19:33:15.027Z] Copying: 888/1024 [MB] (17 MBps) [2024-11-27T19:33:15.600Z] Copying: 902/1024 [MB] 
(13 MBps) [2024-11-27T19:33:16.988Z] Copying: 919/1024 [MB] (17 MBps) [2024-11-27T19:33:17.932Z] Copying: 930/1024 [MB] (10 MBps) [2024-11-27T19:33:18.921Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-27T19:33:19.862Z] Copying: 950/1024 [MB] (10 MBps) [2024-11-27T19:33:20.804Z] Copying: 961/1024 [MB] (10 MBps) [2024-11-27T19:33:21.750Z] Copying: 994076/1048576 [kB] (10004 kBps) [2024-11-27T19:33:22.695Z] Copying: 980/1024 [MB] (10 MBps) [2024-11-27T19:33:23.638Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-27T19:33:25.027Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-27T19:33:25.973Z] Copying: 1015/1024 [MB] (14 MBps) [2024-11-27T19:33:25.973Z] Copying: 1048564/1048576 [kB] (8476 kBps) [2024-11-27T19:33:25.973Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 19:33:25.619176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.338 [2024-11-27 19:33:25.619225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:16.338 [2024-11-27 19:33:25.619237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:16.338 [2024-11-27 19:33:25.619244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.338 [2024-11-27 19:33:25.620252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:16.338 [2024-11-27 19:33:25.624009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.338 [2024-11-27 19:33:25.624040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:16.338 [2024-11-27 19:33:25.624049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:35:16.338 [2024-11-27 19:33:25.624056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.338 [2024-11-27 19:33:25.631120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.338 [2024-11-27 19:33:25.631155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:16.338 [2024-11-27 19:33:25.631163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms 00:35:16.338 [2024-11-27 19:33:25.631169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.338 [2024-11-27 19:33:25.631188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.338 [2024-11-27 19:33:25.631195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:16.338 [2024-11-27 19:33:25.631202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:16.338 [2024-11-27 19:33:25.631215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.338 [2024-11-27 19:33:25.631252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.338 [2024-11-27 19:33:25.631260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:16.338 [2024-11-27 19:33:25.631266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:16.338 [2024-11-27 19:33:25.631272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.338 [2024-11-27 19:33:25.631282] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:16.338 [2024-11-27 19:33:25.631290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:35:16.338 [2024-11-27 19:33:25.631298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 
state: free 00:35:16.338 [2024-11-27 19:33:25.631305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:16.338 [2024-11-27 19:33:25.631664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631733] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:16.339 [2024-11-27 19:33:25.631877] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:16.339 [2024-11-27 19:33:25.631883] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f 00:35:16.339 [2024-11-27 19:33:25.631889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:35:16.339 [2024-11-27 19:33:25.631894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:35:16.339 [2024-11-27 19:33:25.631900] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:35:16.339 [2024-11-27 19:33:25.631905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:35:16.339 [2024-11-27 19:33:25.631913] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:16.339 [2024-11-27 19:33:25.631919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:16.339 [2024-11-27 19:33:25.631924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:16.339 [2024-11-27 19:33:25.631929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:16.339 [2024-11-27 19:33:25.631933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:16.339 [2024-11-27 19:33:25.631939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.339 [2024-11-27 19:33:25.631945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:16.339 [2024-11-27 19:33:25.631951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:35:16.339 [2024-11-27 19:33:25.631956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.641671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.339 [2024-11-27 19:33:25.641696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:16.339 [2024-11-27 19:33:25.641707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.703 ms 00:35:16.339 [2024-11-27 19:33:25.641713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.641977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.339 [2024-11-27 19:33:25.641987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:16.339 [2024-11-27 19:33:25.641994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:35:16.339 [2024-11-27 19:33:25.642000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.667673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.667703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:16.339 [2024-11-27 19:33:25.667710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.667716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.667756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.667763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:16.339 [2024-11-27 19:33:25.667769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.667774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.667807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.667814] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:16.339 [2024-11-27 19:33:25.667823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.667828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.667840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.667846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:16.339 [2024-11-27 19:33:25.667852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.667857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.726917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.726966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:16.339 [2024-11-27 19:33:25.726975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.726982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.775976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:16.339 [2024-11-27 19:33:25.776022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:16.339 [2024-11-27 19:33:25.776077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:16.339 [2024-11-27 19:33:25.776147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:16.339 [2024-11-27 19:33:25.776224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:16.339 [2024-11-27 19:33:25.776262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:16.339 [2024-11-27 19:33:25.776307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.339 [2024-11-27 19:33:25.776312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.339 [2024-11-27 19:33:25.776346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.339 [2024-11-27 19:33:25.776353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:16.340 [2024-11-27 19:33:25.776359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.340 [2024-11-27 19:33:25.776365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.340 [2024-11-27 19:33:25.776457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 159.456 ms, result 0 00:35:17.725 00:35:17.725 00:35:17.725 19:33:27 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:17.725 [2024-11-27 19:33:27.106881] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:35:17.725 [2024-11-27 19:33:27.107007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86166 ] 00:35:17.725 [2024-11-27 19:33:27.264468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.725 [2024-11-27 19:33:27.344684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.987 [2024-11-27 19:33:27.554068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:17.987 [2024-11-27 19:33:27.554135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:18.249 [2024-11-27 19:33:27.705329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.249 [2024-11-27 19:33:27.705368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:18.249 [2024-11-27 19:33:27.705377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:18.249 [2024-11-27 19:33:27.705383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.249 [2024-11-27 19:33:27.705417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.249 [2024-11-27 19:33:27.705426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:18.249 [2024-11-27 19:33:27.705433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:18.249 [2024-11-27 19:33:27.705439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.249 [2024-11-27 19:33:27.705451] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:18.249 [2024-11-27 19:33:27.705949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:18.249 [2024-11-27 19:33:27.705961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.249 [2024-11-27 19:33:27.705968] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:18.249 [2024-11-27 19:33:27.705974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:35:18.249 [2024-11-27 19:33:27.705979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.249 [2024-11-27 19:33:27.706282] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:18.249 [2024-11-27 19:33:27.706302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.706311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:18.250 [2024-11-27 19:33:27.706318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:35:18.250 [2024-11-27 19:33:27.706323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.706353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.706359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:18.250 [2024-11-27 19:33:27.706366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:35:18.250 [2024-11-27 19:33:27.706371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.706565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.706573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:18.250 [2024-11-27 19:33:27.706579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:35:18.250 [2024-11-27 19:33:27.706585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.706633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.706639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:18.250 [2024-11-27 19:33:27.706645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:18.250 [2024-11-27 19:33:27.706651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.706666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.706672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:18.250 [2024-11-27 19:33:27.706679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:18.250 [2024-11-27 19:33:27.706685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.706697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:18.250 [2024-11-27 19:33:27.709522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.709548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:18.250 [2024-11-27 19:33:27.709555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:35:18.250 [2024-11-27 19:33:27.709560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.709581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.709588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:18.250 [2024-11-27 19:33:27.709594] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:18.250 [2024-11-27 19:33:27.709599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.709629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:18.250 [2024-11-27 19:33:27.709644] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:18.250 [2024-11-27 19:33:27.709672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:18.250 [2024-11-27 19:33:27.709683] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:18.250 [2024-11-27 19:33:27.709761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:18.250 [2024-11-27 19:33:27.709768] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:18.250 [2024-11-27 19:33:27.709776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:18.250 [2024-11-27 19:33:27.709784] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:18.250 [2024-11-27 19:33:27.709790] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:18.250 [2024-11-27 19:33:27.709798] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:18.250 [2024-11-27 19:33:27.709804] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:18.250 [2024-11-27 19:33:27.709809] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:18.250 [2024-11-27 19:33:27.709814] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:18.250 [2024-11-27 19:33:27.709820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.709825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:18.250 [2024-11-27 19:33:27.709832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:35:18.250 [2024-11-27 19:33:27.709837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.709899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.250 [2024-11-27 19:33:27.709906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:18.250 [2024-11-27 19:33:27.709911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:35:18.250 [2024-11-27 19:33:27.709918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.250 [2024-11-27 19:33:27.709992] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:18.250 [2024-11-27 19:33:27.709999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:18.250 [2024-11-27 19:33:27.710005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:18.250 [2024-11-27 19:33:27.710021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:18.250 [2024-11-27 
19:33:27.710027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:18.250 [2024-11-27 19:33:27.710037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:18.250 [2024-11-27 19:33:27.710047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:18.250 [2024-11-27 19:33:27.710052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:18.250 [2024-11-27 19:33:27.710058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:18.250 [2024-11-27 19:33:27.710063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:18.250 [2024-11-27 19:33:27.710069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:18.250 [2024-11-27 19:33:27.710077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:18.250 [2024-11-27 19:33:27.710088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:18.250 [2024-11-27 19:33:27.710103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:18.250 [2024-11-27 19:33:27.710118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:18.250 [2024-11-27 19:33:27.710152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:18.250 [2024-11-27 19:33:27.710167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:18.250 [2024-11-27 19:33:27.710183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:18.250 [2024-11-27 19:33:27.710192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:18.250 [2024-11-27 19:33:27.710197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:18.250 [2024-11-27 19:33:27.710202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:18.250 [2024-11-27 19:33:27.710207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:35:18.250 [2024-11-27 19:33:27.710212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:18.250 [2024-11-27 19:33:27.710217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:18.250 [2024-11-27 19:33:27.710227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:18.250 [2024-11-27 19:33:27.710232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710236] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:18.250 [2024-11-27 19:33:27.710244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:18.250 [2024-11-27 19:33:27.710250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:18.250 [2024-11-27 19:33:27.710263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:18.250 [2024-11-27 19:33:27.710268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:18.250 [2024-11-27 19:33:27.710273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:18.250 [2024-11-27 19:33:27.710278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:18.250 [2024-11-27 19:33:27.710283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:18.250 [2024-11-27 19:33:27.710288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:18.250 [2024-11-27 19:33:27.710294] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:18.250 [2024-11-27 19:33:27.710300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:18.251 [2024-11-27 19:33:27.710312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:18.251 [2024-11-27 19:33:27.710317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:18.251 [2024-11-27 19:33:27.710323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:18.251 [2024-11-27 19:33:27.710328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:18.251 [2024-11-27 19:33:27.710333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:18.251 [2024-11-27 19:33:27.710339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:18.251 [2024-11-27 19:33:27.710344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:18.251 [2024-11-27 19:33:27.710350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:18.251 [2024-11-27 19:33:27.710355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:18.251 [2024-11-27 19:33:27.710381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:18.251 [2024-11-27 19:33:27.710387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:18.251 [2024-11-27 19:33:27.710399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:18.251 [2024-11-27 19:33:27.710404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:18.251 [2024-11-27 19:33:27.710409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:18.251 [2024-11-27 19:33:27.710415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.710421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:18.251 [2024-11-27 19:33:27.710426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:35:18.251 [2024-11-27 19:33:27.710432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.728778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.728805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:18.251 [2024-11-27 19:33:27.728812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:35:18.251 [2024-11-27 19:33:27.728819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.728877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.728884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:18.251 [2024-11-27 19:33:27.728892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:35:18.251 [2024-11-27 19:33:27.728897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.772360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.772393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:18.251 
[2024-11-27 19:33:27.772402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.427 ms 00:35:18.251 [2024-11-27 19:33:27.772408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.772433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.772441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:18.251 [2024-11-27 19:33:27.772448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:18.251 [2024-11-27 19:33:27.772453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.772530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.772539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:18.251 [2024-11-27 19:33:27.772545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:18.251 [2024-11-27 19:33:27.772551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.772639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.772647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:18.251 [2024-11-27 19:33:27.772653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:35:18.251 [2024-11-27 19:33:27.772658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.783143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.783171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:18.251 [2024-11-27 19:33:27.783178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.471 ms 00:35:18.251 [2024-11-27 19:33:27.783184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.783271] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:18.251 [2024-11-27 19:33:27.783281] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:18.251 [2024-11-27 19:33:27.783288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.783296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:18.251 [2024-11-27 19:33:27.783303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:18.251 [2024-11-27 19:33:27.783308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.792441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.792467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:18.251 [2024-11-27 19:33:27.792475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.120 ms 00:35:18.251 [2024-11-27 19:33:27.792481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.792566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.792573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:18.251 [2024-11-27 19:33:27.792579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.069 ms 00:35:18.251 [2024-11-27 19:33:27.792588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.792612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.792619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:18.251 [2024-11-27 19:33:27.792625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:18.251 [2024-11-27 19:33:27.792635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.793063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.793071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:18.251 [2024-11-27 19:33:27.793077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:35:18.251 [2024-11-27 19:33:27.793083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.793096] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:18.251 [2024-11-27 19:33:27.793103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.793109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:18.251 [2024-11-27 19:33:27.793114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:18.251 [2024-11-27 19:33:27.793119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.801593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:18.251 [2024-11-27 19:33:27.801700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.801708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:18.251 [2024-11-27 19:33:27.801715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.549 ms 00:35:18.251 [2024-11-27 19:33:27.801720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.803395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.803420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:18.251 [2024-11-27 19:33:27.803427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:35:18.251 [2024-11-27 19:33:27.803433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.803490] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:18.251 [2024-11-27 19:33:27.803828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.803840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:18.251 [2024-11-27 19:33:27.803847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:35:18.251 [2024-11-27 19:33:27.803852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.803872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.803879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:18.251 [2024-11-27 
19:33:27.803884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:18.251 [2024-11-27 19:33:27.803890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.803912] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:18.251 [2024-11-27 19:33:27.803919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.251 [2024-11-27 19:33:27.803925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:18.251 [2024-11-27 19:33:27.803930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:18.251 [2024-11-27 19:33:27.803936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.251 [2024-11-27 19:33:27.822046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.252 [2024-11-27 19:33:27.822176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:18.252 [2024-11-27 19:33:27.822190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.098 ms 00:35:18.252 [2024-11-27 19:33:27.822197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.252 [2024-11-27 19:33:27.822250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:18.252 [2024-11-27 19:33:27.822258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:18.252 [2024-11-27 19:33:27.822264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:18.252 [2024-11-27 19:33:27.822270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:18.252 [2024-11-27 19:33:27.822987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.343 ms, result 0 00:35:19.639  [2024-11-27T19:33:30.216Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-27T19:33:31.161Z] Copying: 37/1024 [MB] (16 MBps) [2024-11-27T19:33:32.106Z] Copying: 58/1024 [MB] (21 MBps) [2024-11-27T19:33:33.048Z] Copying: 75/1024 [MB] (16 MBps) [2024-11-27T19:33:33.992Z] Copying: 93/1024 [MB] (18 MBps) [2024-11-27T19:33:35.378Z] Copying: 115/1024 [MB] (21 MBps) [2024-11-27T19:33:36.320Z] Copying: 137/1024 [MB] (22 MBps) [2024-11-27T19:33:37.265Z] Copying: 149/1024 [MB] (11 MBps) [2024-11-27T19:33:38.210Z] Copying: 166/1024 [MB] (16 MBps) [2024-11-27T19:33:39.218Z] Copying: 176/1024 [MB] (10 MBps) [2024-11-27T19:33:40.161Z] Copying: 187/1024 [MB] (10 MBps) [2024-11-27T19:33:41.106Z] Copying: 205/1024 [MB] (17 MBps) [2024-11-27T19:33:42.051Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-27T19:33:42.996Z] Copying: 226/1024 [MB] (11 MBps) [2024-11-27T19:33:44.384Z] Copying: 243/1024 [MB] (16 MBps) [2024-11-27T19:33:45.329Z] Copying: 265/1024 [MB] (22 MBps) [2024-11-27T19:33:46.273Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-27T19:33:47.220Z] Copying: 289/1024 [MB] (12 MBps) [2024-11-27T19:33:48.164Z] Copying: 311/1024 [MB] (21 MBps) [2024-11-27T19:33:49.109Z] Copying: 326/1024 [MB] (15 MBps) [2024-11-27T19:33:50.055Z] Copying: 350/1024 [MB] (24 MBps) [2024-11-27T19:33:50.999Z] Copying: 366/1024 [MB] (15 MBps) [2024-11-27T19:33:52.386Z] Copying: 386/1024 [MB] (20 MBps) [2024-11-27T19:33:53.329Z] Copying: 407/1024 [MB] (21 MBps) [2024-11-27T19:33:54.273Z] Copying: 421/1024 [MB] (13 MBps) [2024-11-27T19:33:55.217Z] Copying: 440/1024 [MB] (19 MBps) [2024-11-27T19:33:56.161Z] Copying: 459/1024 [MB] (18 MBps) [2024-11-27T19:33:57.104Z] Copying: 477/1024 [MB] (18 MBps) 
[2024-11-27T19:33:58.048Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-27T19:33:58.993Z] Copying: 504/1024 [MB] (16 MBps) [2024-11-27T19:34:00.381Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-27T19:34:01.325Z] Copying: 530/1024 [MB] (15 MBps) [2024-11-27T19:34:02.270Z] Copying: 541/1024 [MB] (10 MBps) [2024-11-27T19:34:03.215Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-27T19:34:04.157Z] Copying: 572/1024 [MB] (20 MBps) [2024-11-27T19:34:05.102Z] Copying: 584/1024 [MB] (11 MBps) [2024-11-27T19:34:06.046Z] Copying: 598/1024 [MB] (14 MBps) [2024-11-27T19:34:06.991Z] Copying: 615/1024 [MB] (16 MBps) [2024-11-27T19:34:07.999Z] Copying: 627/1024 [MB] (12 MBps) [2024-11-27T19:34:09.388Z] Copying: 649/1024 [MB] (21 MBps) [2024-11-27T19:34:10.329Z] Copying: 670/1024 [MB] (21 MBps) [2024-11-27T19:34:11.269Z] Copying: 690/1024 [MB] (19 MBps) [2024-11-27T19:34:12.214Z] Copying: 707/1024 [MB] (16 MBps) [2024-11-27T19:34:13.159Z] Copying: 726/1024 [MB] (19 MBps) [2024-11-27T19:34:14.101Z] Copying: 745/1024 [MB] (18 MBps) [2024-11-27T19:34:15.046Z] Copying: 760/1024 [MB] (14 MBps) [2024-11-27T19:34:15.992Z] Copying: 779/1024 [MB] (19 MBps) [2024-11-27T19:34:17.381Z] Copying: 794/1024 [MB] (15 MBps) [2024-11-27T19:34:18.326Z] Copying: 808/1024 [MB] (14 MBps) [2024-11-27T19:34:19.270Z] Copying: 823/1024 [MB] (14 MBps) [2024-11-27T19:34:20.215Z] Copying: 843/1024 [MB] (20 MBps) [2024-11-27T19:34:21.160Z] Copying: 855/1024 [MB] (12 MBps) [2024-11-27T19:34:22.106Z] Copying: 872/1024 [MB] (17 MBps) [2024-11-27T19:34:23.053Z] Copying: 890/1024 [MB] (18 MBps) [2024-11-27T19:34:23.997Z] Copying: 905/1024 [MB] (14 MBps) [2024-11-27T19:34:25.386Z] Copying: 924/1024 [MB] (18 MBps) [2024-11-27T19:34:26.330Z] Copying: 945/1024 [MB] (20 MBps) [2024-11-27T19:34:27.275Z] Copying: 958/1024 [MB] (13 MBps) [2024-11-27T19:34:28.222Z] Copying: 974/1024 [MB] (15 MBps) [2024-11-27T19:34:29.164Z] Copying: 990/1024 [MB] (16 MBps) [2024-11-27T19:34:30.110Z] Copying: 1008/1024 [MB] (17 MBps) [2024-11-27T19:34:30.110Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-27 19:34:30.054041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.475 [2024-11-27 19:34:30.054214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:20.475 [2024-11-27 19:34:30.054251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:36:20.475 [2024-11-27 19:34:30.054275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.475 [2024-11-27 19:34:30.054333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:20.475 [2024-11-27 19:34:30.057980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.475 [2024-11-27 19:34:30.058033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:20.475 [2024-11-27 19:34:30.058047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:36:20.475 [2024-11-27 19:34:30.058065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.475 [2024-11-27 19:34:30.058328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.475 [2024-11-27 19:34:30.058340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:20.475 [2024-11-27 19:34:30.058350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:36:20.475 [2024-11-27 19:34:30.058358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.475 
[2024-11-27 19:34:30.058389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.475 [2024-11-27 19:34:30.058400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:20.475 [2024-11-27 19:34:30.058410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:20.475 [2024-11-27 19:34:30.058418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.475 [2024-11-27 19:34:30.058478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.475 [2024-11-27 19:34:30.058490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:20.475 [2024-11-27 19:34:30.058500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:36:20.475 [2024-11-27 19:34:30.058508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.475 [2024-11-27 19:34:30.058524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:20.475 [2024-11-27 19:34:30.058537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:20.475 [2024-11-27 19:34:30.058548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
18: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058887] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:20.475 [2024-11-27 19:34:30.058956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.058964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.058972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.058980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.058988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.058998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059104] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 
19:34:30.059376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:20.476 [2024-11-27 19:34:30.059559] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:20.476 [2024-11-27 19:34:30.059569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c01b3f1-2509-48ab-8fa9-84eb47f7df7f 00:36:20.476 [2024-11-27 19:34:30.059578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:20.476 [2024-11-27 19:34:30.059587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:36:20.476 [2024-11-27 19:34:30.059595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:36:20.476 [2024-11-27 19:34:30.059607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:36:20.476 [2024-11-27 19:34:30.059615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:20.476 [2024-11-27 19:34:30.059623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:20.476 [2024-11-27 19:34:30.059631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:20.476 [2024-11-27 19:34:30.059637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:20.476 [2024-11-27 19:34:30.059644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:20.476 [2024-11-27 19:34:30.059652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.476 [2024-11-27 19:34:30.059661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:20.476 [2024-11-27 19:34:30.059669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:36:20.476 [2024-11-27 19:34:30.059677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.476 [2024-11-27 19:34:30.074172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.476 [2024-11-27 19:34:30.074430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:20.476 [2024-11-27 19:34:30.074461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:36:20.476 [2024-11-27 19:34:30.074471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.476 [2024-11-27 19:34:30.074871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:20.476 [2024-11-27 19:34:30.074888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:20.476 [2024-11-27 
19:34:30.074900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:36:20.476 [2024-11-27 19:34:30.074908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.738 [2024-11-27 19:34:30.111994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.738 [2024-11-27 19:34:30.112052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:20.738 [2024-11-27 19:34:30.112065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.738 [2024-11-27 19:34:30.112075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.738 [2024-11-27 19:34:30.112176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.112189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:20.739 [2024-11-27 19:34:30.112199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.112208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.112272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.112289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:20.739 [2024-11-27 19:34:30.112299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.112308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.112325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.112334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:20.739 [2024-11-27 19:34:30.112343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.112351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.199272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.199331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:20.739 [2024-11-27 19:34:30.199346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.199355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.269353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.269583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:20.739 [2024-11-27 19:34:30.269605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.269614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.269711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.269722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:20.739 [2024-11-27 19:34:30.269735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.269744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.269785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.269794] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:20.739 [2024-11-27 19:34:30.269803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.269811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.269897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.269906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:20.739 [2024-11-27 19:34:30.269915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.269926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.269953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.269962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:20.739 [2024-11-27 19:34:30.269972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.269980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.270021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.270030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:20.739 [2024-11-27 19:34:30.270039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.270050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.270096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:20.739 [2024-11-27 19:34:30.270106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:20.739 [2024-11-27 19:34:30.270114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:20.739 [2024-11-27 19:34:30.270152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:20.739 [2024-11-27 19:34:30.270289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 216.240 ms, result 0 00:36:21.683 00:36:21.683 00:36:21.683 19:34:31 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:23.600 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:23.600 19:34:32 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:23.600 19:34:32 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:23.600 19:34:32 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:23.600 Process with pid 84002 is not found 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84002 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84002 ']' 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84002 00:36:23.600 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84002) - No such process 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- common/autotest_common.sh@981 
-- # echo 'Process with pid 84002 is not found' 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:23.600 Remove shared memory files 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_band_md /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_l2p_l1 /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_l2p_l2 /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_l2p_l2_ctx /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_nvc_md /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_p2l_pool /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_sb /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_sb_shm /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_trim_bitmap /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_trim_log /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_trim_md /dev/hugepages/ftl_4c01b3f1-2509-48ab-8fa9-84eb47f7df7f_vmap 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:23.600 ************************************ 00:36:23.600 END TEST ftl_restore_fast 00:36:23.600 ************************************ 00:36:23.600 00:36:23.600 real 4m39.577s 00:36:23.600 user 4m27.802s 00:36:23.600 sys 0m11.403s 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:23.600 19:34:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:23.600 Process with pid 75072 is not found 00:36:23.600 19:34:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:23.600 19:34:33 ftl -- ftl/ftl.sh@14 -- # killprocess 75072 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@954 -- # '[' -z 75072 ']' 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@958 -- # kill -0 75072 00:36:23.600 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75072) - No such process 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75072 is not found' 00:36:23.600 19:34:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:23.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.600 19:34:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86838 00:36:23.600 19:34:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86838 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 86838 ']' 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.600 19:34:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.601 19:34:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:23.601 19:34:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.601 19:34:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 [2024-11-27 19:34:33.219778] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:36:23.601 [2024-11-27 19:34:33.219890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86838 ] 00:36:23.862 [2024-11-27 19:34:33.381226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.862 [2024-11-27 19:34:33.488324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.805 19:34:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.805 19:34:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:24.805 19:34:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:25.066 nvme0n1 00:36:25.066 19:34:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:25.066 19:34:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:25.066 19:34:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:25.469 19:34:34 ftl -- ftl/common.sh@28 -- # stores=b1023053-c6a6-4957-b674-0c3131445d0c 00:36:25.469 19:34:34 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:25.469 19:34:34 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1023053-c6a6-4957-b674-0c3131445d0c 00:36:25.469 19:34:34 ftl -- ftl/ftl.sh@23 -- # killprocess 86838 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@954 -- # '[' -z 86838 ']' 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@958 -- # kill -0 86838 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@959 -- # uname 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86838 00:36:25.469 killing process with pid 86838 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86838' 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@973 -- # kill 86838 00:36:25.469 19:34:34 ftl -- common/autotest_common.sh@978 -- # wait 86838 00:36:26.878 19:34:36 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:27.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:27.139 Waiting for block devices as requested 00:36:27.139 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.139 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.400 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.400 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:32.689 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:32.689 19:34:42 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:32.689 Remove shared memory files 00:36:32.689 19:34:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:32.689 19:34:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:32.689 19:34:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:32.689 19:34:42 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:32.689 19:34:42 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:32.689 19:34:42 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:32.689 
************************************ 00:36:32.689 END TEST ftl 00:36:32.689 ************************************ 00:36:32.689 00:36:32.689 real 17m58.874s 00:36:32.689 user 19m48.205s 00:36:32.689 sys 1m27.505s 00:36:32.689 19:34:42 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.689 19:34:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:32.689 19:34:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:32.689 19:34:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:32.689 19:34:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:32.689 19:34:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:32.689 19:34:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:32.689 19:34:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:32.689 19:34:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:32.689 19:34:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:32.689 19:34:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:32.689 19:34:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:32.689 19:34:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:32.689 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:36:32.689 19:34:42 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:32.689 19:34:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:32.689 19:34:42 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:32.689 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:36:34.077 INFO: APP EXITING 00:36:34.077 INFO: killing all VMs 00:36:34.077 INFO: killing vhost app 00:36:34.077 INFO: EXIT DONE 00:36:34.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:34.598 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:34.598 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:34.858 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:34.858 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:35.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:35.690 Cleaning 00:36:35.690 Removing: /var/run/dpdk/spdk0/config 00:36:35.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:35.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:35.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:35.690 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:35.690 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:35.690 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:35.690 Removing: /var/run/dpdk/spdk0 00:36:35.690 Removing: /var/run/dpdk/spdk_pid56968 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57171 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57378 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57471 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57511 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57633 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57651 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57845 00:36:35.690 Removing: /var/run/dpdk/spdk_pid57931 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58022 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58127 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58219 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58253 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58289 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58360 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58466 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58896 00:36:35.690 Removing: /var/run/dpdk/spdk_pid58955 
00:36:35.690 Removing: /var/run/dpdk/spdk_pid59007 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59023 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59125 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59130 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59232 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59248 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59301 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59319 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59372 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59389 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59545 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59581 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59665 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59837 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59915 00:36:35.690 Removing: /var/run/dpdk/spdk_pid59952 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60396 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60494 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60605 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60660 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60691 00:36:35.690 Removing: /var/run/dpdk/spdk_pid60775 00:36:35.690 Removing: /var/run/dpdk/spdk_pid61389 00:36:35.690 Removing: /var/run/dpdk/spdk_pid61426 00:36:35.690 Removing: /var/run/dpdk/spdk_pid61897 00:36:35.690 Removing: /var/run/dpdk/spdk_pid61995 00:36:35.690 Removing: /var/run/dpdk/spdk_pid62116 00:36:35.690 Removing: /var/run/dpdk/spdk_pid62169 00:36:35.690 Removing: /var/run/dpdk/spdk_pid62200 00:36:35.690 Removing: /var/run/dpdk/spdk_pid62220 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64067 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64194 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64203 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64220 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64259 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64263 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64275 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64321 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64325 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64337 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64382 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64386 00:36:35.690 Removing: /var/run/dpdk/spdk_pid64398 00:36:35.690 Removing: /var/run/dpdk/spdk_pid65799 00:36:35.690 Removing: /var/run/dpdk/spdk_pid65896 00:36:35.690 Removing: /var/run/dpdk/spdk_pid67301 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69031 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69104 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69186 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69290 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69383 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69480 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69554 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69629 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69733 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69825 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69926 00:36:35.690 Removing: /var/run/dpdk/spdk_pid69995 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70070 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70180 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70266 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70367 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70440 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70511 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70625 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70718 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70808 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70888 00:36:35.690 Removing: /var/run/dpdk/spdk_pid70962 00:36:35.690 Removing: 
/var/run/dpdk/spdk_pid71036 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71116 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71220 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71311 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71400 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71474 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71554 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71628 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71702 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71810 00:36:35.690 Removing: /var/run/dpdk/spdk_pid71904 00:36:35.690 Removing: /var/run/dpdk/spdk_pid72048 00:36:35.690 Removing: /var/run/dpdk/spdk_pid72332 00:36:35.690 Removing: /var/run/dpdk/spdk_pid72363 00:36:35.690 Removing: /var/run/dpdk/spdk_pid72808 00:36:35.952 Removing: /var/run/dpdk/spdk_pid72997 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73091 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73211 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73260 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73280 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73599 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73659 00:36:35.952 Removing: /var/run/dpdk/spdk_pid73732 00:36:35.952 Removing: /var/run/dpdk/spdk_pid74131 00:36:35.952 Removing: /var/run/dpdk/spdk_pid74271 00:36:35.953 Removing: /var/run/dpdk/spdk_pid75072 00:36:35.953 Removing: /var/run/dpdk/spdk_pid75204 00:36:35.953 Removing: /var/run/dpdk/spdk_pid75368 00:36:35.953 Removing: /var/run/dpdk/spdk_pid75465 00:36:35.953 Removing: /var/run/dpdk/spdk_pid75773 00:36:35.953 Removing: /var/run/dpdk/spdk_pid76051 00:36:35.953 Removing: /var/run/dpdk/spdk_pid76409 00:36:35.953 Removing: /var/run/dpdk/spdk_pid76585 00:36:35.953 Removing: /var/run/dpdk/spdk_pid76765 00:36:35.953 Removing: /var/run/dpdk/spdk_pid76818 00:36:35.953 Removing: /var/run/dpdk/spdk_pid77006 00:36:35.953 Removing: /var/run/dpdk/spdk_pid77037 00:36:35.953 Removing: /var/run/dpdk/spdk_pid77095 00:36:35.953 Removing: /var/run/dpdk/spdk_pid77339 00:36:35.953 Removing: /var/run/dpdk/spdk_pid77575 00:36:35.953 Removing: /var/run/dpdk/spdk_pid78229 00:36:35.953 Removing: /var/run/dpdk/spdk_pid78925 00:36:35.953 Removing: /var/run/dpdk/spdk_pid79605 00:36:35.953 Removing: /var/run/dpdk/spdk_pid80336 00:36:35.953 Removing: /var/run/dpdk/spdk_pid80489 00:36:35.953 Removing: /var/run/dpdk/spdk_pid80582 00:36:35.953 Removing: /var/run/dpdk/spdk_pid81019 00:36:35.953 Removing: /var/run/dpdk/spdk_pid81075 00:36:35.953 Removing: /var/run/dpdk/spdk_pid81725 00:36:35.953 Removing: /var/run/dpdk/spdk_pid82182 00:36:35.953 Removing: /var/run/dpdk/spdk_pid82957 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83090 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83133 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83191 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83247 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83311 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83495 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83575 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83643 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83732 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83766 00:36:35.953 Removing: /var/run/dpdk/spdk_pid83844 00:36:35.953 Removing: /var/run/dpdk/spdk_pid84002 00:36:35.953 Removing: /var/run/dpdk/spdk_pid84237 00:36:35.953 Removing: /var/run/dpdk/spdk_pid84797 00:36:35.953 Removing: /var/run/dpdk/spdk_pid85496 00:36:35.953 Removing: /var/run/dpdk/spdk_pid86166 00:36:35.953 Removing: /var/run/dpdk/spdk_pid86838 00:36:35.953 Clean 00:36:35.953 19:34:45 -- common/autotest_common.sh@1453 -- # return 0 00:36:35.953 
19:34:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:35.953 19:34:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.953 19:34:45 -- common/autotest_common.sh@10 -- # set +x 00:36:35.953 19:34:45 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:35.953 19:34:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.953 19:34:45 -- common/autotest_common.sh@10 -- # set +x 00:36:36.214 19:34:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:36.214 19:34:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:36.214 19:34:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:36.214 19:34:45 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:36.214 19:34:45 -- spdk/autotest.sh@398 -- # hostname 00:36:36.214 19:34:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:36.214 geninfo: WARNING: invalid characters removed from testname! 00:37:02.803 19:35:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:04.729 19:35:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:07.274 19:35:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:09.820 19:35:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:11.734 19:35:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:14.281 19:35:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:16.191 19:35:25 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:16.191 19:35:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:16.191 19:35:25 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:16.191 19:35:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:16.191 19:35:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:16.191 19:35:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:16.191 + [[ -n 5027 ]] 00:37:16.191 + sudo kill 5027 00:37:16.202 [Pipeline] } 00:37:16.218 [Pipeline] // timeout 00:37:16.225 [Pipeline] } 00:37:16.240 [Pipeline] // stage 00:37:16.245 [Pipeline] } 00:37:16.260 [Pipeline] // catchError 00:37:16.270 [Pipeline] stage 00:37:16.273 [Pipeline] { (Stop VM) 00:37:16.286 [Pipeline] sh 00:37:16.571 + vagrant halt 00:37:19.113 ==> default: Halting domain... 00:37:24.416 [Pipeline] sh 00:37:24.699 + vagrant destroy -f 00:37:27.244 ==> default: Removing domain... 00:37:27.829 [Pipeline] sh 00:37:28.115 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:28.126 [Pipeline] } 00:37:28.143 [Pipeline] // stage 00:37:28.149 [Pipeline] } 00:37:28.163 [Pipeline] // dir 00:37:28.168 [Pipeline] } 00:37:28.183 [Pipeline] // wrap 00:37:28.190 [Pipeline] } 00:37:28.202 [Pipeline] // catchError 00:37:28.212 [Pipeline] stage 00:37:28.214 [Pipeline] { (Epilogue) 00:37:28.227 [Pipeline] sh 00:37:28.554 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:33.845 [Pipeline] catchError 00:37:33.848 [Pipeline] { 00:37:33.864 [Pipeline] sh 00:37:34.155 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:34.155 Artifacts sizes are good 00:37:34.166 [Pipeline] } 00:37:34.182 [Pipeline] // catchError 00:37:34.196 [Pipeline] archiveArtifacts 00:37:34.203 Archiving artifacts 00:37:34.301 [Pipeline] cleanWs 00:37:34.314 [WS-CLEANUP] Deleting project workspace... 00:37:34.314 [WS-CLEANUP] Deferred wipeout is used... 00:37:34.322 [WS-CLEANUP] done 00:37:34.324 [Pipeline] } 00:37:34.340 [Pipeline] // stage 00:37:34.345 [Pipeline] } 00:37:34.360 [Pipeline] // node 00:37:34.365 [Pipeline] End of Pipeline 00:37:34.405 Finished: SUCCESS